00:00:00.001 Started by upstream project "autotest-nightly" build number 4367 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3730 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.115 Fetching changes from the remote Git repository 00:00:00.118 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.142 Using shallow fetch with depth 1 00:00:00.142 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.142 > git --version # timeout=10 00:00:00.172 > git --version # 'git version 2.39.2' 00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.207 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.207 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.663 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.675 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.688 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.688 > git config core.sparsecheckout # timeout=10 00:00:06.698 > git read-tree -mu HEAD # timeout=10 00:00:06.713 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.732 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.732 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.821 [Pipeline] Start of Pipeline 00:00:06.834 [Pipeline] library 00:00:06.836 Loading library shm_lib@master 00:00:06.836 Library shm_lib@master is cached. Copying from home. 00:00:06.853 [Pipeline] node 00:00:06.876 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.878 [Pipeline] { 00:00:06.889 [Pipeline] catchError 00:00:06.890 [Pipeline] { 00:00:06.902 [Pipeline] wrap 00:00:06.910 [Pipeline] { 00:00:06.918 [Pipeline] stage 00:00:06.919 [Pipeline] { (Prologue) 00:00:06.937 [Pipeline] echo 00:00:06.939 Node: VM-host-SM9 00:00:06.944 [Pipeline] cleanWs 00:00:06.954 [WS-CLEANUP] Deleting project workspace... 00:00:06.954 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.960 [WS-CLEANUP] done 00:00:07.157 [Pipeline] setCustomBuildProperty 00:00:07.234 [Pipeline] httpRequest 00:00:07.674 [Pipeline] echo 00:00:07.676 Sorcerer 10.211.164.20 is alive 00:00:07.683 [Pipeline] retry 00:00:07.685 [Pipeline] { 00:00:07.695 [Pipeline] httpRequest 00:00:07.699 HttpMethod: GET 00:00:07.700 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.700 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.701 Response Code: HTTP/1.1 200 OK 00:00:07.702 Success: Status code 200 is in the accepted range: 200,404 00:00:07.702 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.654 [Pipeline] } 00:00:08.665 [Pipeline] // retry 00:00:08.670 [Pipeline] sh 00:00:08.950 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.965 [Pipeline] httpRequest 00:00:09.381 [Pipeline] echo 00:00:09.383 Sorcerer 10.211.164.20 is alive 00:00:09.392 [Pipeline] retry 00:00:09.394 [Pipeline] { 00:00:09.408 [Pipeline] httpRequest 00:00:09.412 HttpMethod: GET 00:00:09.413 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:09.413 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:09.433 Response Code: HTTP/1.1 200 OK 00:00:09.433 Success: Status code 200 is in the accepted range: 200,404 00:00:09.434 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:48.004 [Pipeline] } 00:00:48.021 [Pipeline] // retry 00:00:48.028 [Pipeline] sh 00:00:48.306 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:51.604 [Pipeline] sh 00:00:51.884 + git -C spdk log --oneline -n5 00:00:51.884 e01cb43b8 mk/spdk.common.mk sed the minor version 00:00:51.884 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:00:51.884 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:00:51.884 66289a6db build: use VERSION file for storing version 00:00:51.884 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:00:51.899 [Pipeline] writeFile 00:00:51.911 [Pipeline] sh 00:00:52.188 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:52.199 [Pipeline] sh 00:00:52.476 + cat autorun-spdk.conf 00:00:52.476 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.476 SPDK_TEST_NVMF=1 00:00:52.476 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.476 SPDK_TEST_URING=1 00:00:52.476 SPDK_TEST_VFIOUSER=1 00:00:52.476 SPDK_TEST_USDT=1 00:00:52.476 SPDK_RUN_ASAN=1 00:00:52.476 SPDK_RUN_UBSAN=1 00:00:52.476 NET_TYPE=virt 00:00:52.476 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:52.483 RUN_NIGHTLY=1 00:00:52.485 [Pipeline] } 00:00:52.498 [Pipeline] // stage 00:00:52.510 [Pipeline] stage 00:00:52.512 [Pipeline] { (Run VM) 00:00:52.524 [Pipeline] sh 00:00:52.803 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:52.803 + echo 'Start stage prepare_nvme.sh' 00:00:52.803 Start stage prepare_nvme.sh 00:00:52.803 + [[ -n 3 ]] 00:00:52.803 + disk_prefix=ex3 00:00:52.803 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:52.803 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:52.803 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:52.803 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.803 ++ 
SPDK_TEST_NVMF=1 00:00:52.803 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.803 ++ SPDK_TEST_URING=1 00:00:52.803 ++ SPDK_TEST_VFIOUSER=1 00:00:52.803 ++ SPDK_TEST_USDT=1 00:00:52.803 ++ SPDK_RUN_ASAN=1 00:00:52.803 ++ SPDK_RUN_UBSAN=1 00:00:52.803 ++ NET_TYPE=virt 00:00:52.803 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:52.803 ++ RUN_NIGHTLY=1 00:00:52.803 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:52.803 + nvme_files=() 00:00:52.803 + declare -A nvme_files 00:00:52.803 + backend_dir=/var/lib/libvirt/images/backends 00:00:52.803 + nvme_files['nvme.img']=5G 00:00:52.803 + nvme_files['nvme-cmb.img']=5G 00:00:52.803 + nvme_files['nvme-multi0.img']=4G 00:00:52.803 + nvme_files['nvme-multi1.img']=4G 00:00:52.803 + nvme_files['nvme-multi2.img']=4G 00:00:52.803 + nvme_files['nvme-openstack.img']=8G 00:00:52.803 + nvme_files['nvme-zns.img']=5G 00:00:52.803 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:52.803 + (( SPDK_TEST_FTL == 1 )) 00:00:52.803 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:52.803 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:52.803 + for nvme in "${!nvme_files[@]}" 00:00:52.803 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:52.803 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:52.803 + for nvme in "${!nvme_files[@]}" 00:00:52.803 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:52.803 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:52.803 + for nvme in "${!nvme_files[@]}" 00:00:52.803 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:52.803 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:52.803 + for nvme in "${!nvme_files[@]}" 00:00:52.803 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:52.803 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:52.803 + for nvme in "${!nvme_files[@]}" 00:00:52.803 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:52.803 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:52.803 + for nvme in "${!nvme_files[@]}" 00:00:52.803 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:52.803 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:52.803 + for nvme in "${!nvme_files[@]}" 00:00:52.803 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:53.062 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.062 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:53.062 + echo 'End stage prepare_nvme.sh' 00:00:53.062 End stage prepare_nvme.sh 00:00:53.073 [Pipeline] sh 00:00:53.408 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:53.408 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 
--nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:00:53.408 00:00:53.408 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:53.408 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:53.408 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:53.408 HELP=0 00:00:53.408 DRY_RUN=0 00:00:53.408 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:00:53.408 NVME_DISKS_TYPE=nvme,nvme, 00:00:53.408 NVME_AUTO_CREATE=0 00:00:53.408 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:00:53.408 NVME_CMB=,, 00:00:53.408 NVME_PMR=,, 00:00:53.408 NVME_ZNS=,, 00:00:53.408 NVME_MS=,, 00:00:53.408 NVME_FDP=,, 00:00:53.408 SPDK_VAGRANT_DISTRO=fedora39 00:00:53.408 SPDK_VAGRANT_VMCPU=10 00:00:53.408 SPDK_VAGRANT_VMRAM=12288 00:00:53.408 SPDK_VAGRANT_PROVIDER=libvirt 00:00:53.408 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:53.408 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:53.408 SPDK_OPENSTACK_NETWORK=0 00:00:53.408 VAGRANT_PACKAGE_BOX=0 00:00:53.408 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:53.408 FORCE_DISTRO=true 00:00:53.408 VAGRANT_BOX_VERSION= 00:00:53.408 EXTRA_VAGRANTFILES= 00:00:53.408 NIC_MODEL=e1000 00:00:53.408 00:00:53.408 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:53.408 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:55.939 Bringing machine 'default' up with 'libvirt' provider... 00:00:56.504 ==> default: Creating image (snapshot of base box volume). 00:00:56.504 ==> default: Creating domain with the following settings... 
00:00:56.504 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734326316_3781cb77561104b77784 00:00:56.504 ==> default: -- Domain type: kvm 00:00:56.504 ==> default: -- Cpus: 10 00:00:56.504 ==> default: -- Feature: acpi 00:00:56.504 ==> default: -- Feature: apic 00:00:56.504 ==> default: -- Feature: pae 00:00:56.504 ==> default: -- Memory: 12288M 00:00:56.504 ==> default: -- Memory Backing: hugepages: 00:00:56.504 ==> default: -- Management MAC: 00:00:56.504 ==> default: -- Loader: 00:00:56.504 ==> default: -- Nvram: 00:00:56.504 ==> default: -- Base box: spdk/fedora39 00:00:56.504 ==> default: -- Storage pool: default 00:00:56.504 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734326316_3781cb77561104b77784.img (20G) 00:00:56.504 ==> default: -- Volume Cache: default 00:00:56.504 ==> default: -- Kernel: 00:00:56.504 ==> default: -- Initrd: 00:00:56.504 ==> default: -- Graphics Type: vnc 00:00:56.504 ==> default: -- Graphics Port: -1 00:00:56.504 ==> default: -- Graphics IP: 127.0.0.1 00:00:56.504 ==> default: -- Graphics Password: Not defined 00:00:56.504 ==> default: -- Video Type: cirrus 00:00:56.504 ==> default: -- Video VRAM: 9216 00:00:56.504 ==> default: -- Sound Type: 00:00:56.504 ==> default: -- Keymap: en-us 00:00:56.504 ==> default: -- TPM Path: 00:00:56.504 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:56.504 ==> default: -- Command line args: 00:00:56.504 ==> default: -> value=-device, 00:00:56.504 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:56.504 ==> default: -> value=-drive, 00:00:56.504 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:00:56.504 ==> default: -> value=-device, 00:00:56.504 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.504 ==> default: -> value=-device, 00:00:56.504 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:56.504 ==> default: -> value=-drive, 00:00:56.504 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:56.504 ==> default: -> value=-device, 00:00:56.504 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.504 ==> default: -> value=-drive, 00:00:56.504 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:56.504 ==> default: -> value=-device, 00:00:56.504 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.504 ==> default: -> value=-drive, 00:00:56.504 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:56.504 ==> default: -> value=-device, 00:00:56.504 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:56.762 ==> default: Creating shared folders metadata... 00:00:56.762 ==> default: Starting domain. 00:00:58.140 ==> default: Waiting for domain to get an IP address... 00:01:16.281 ==> default: Waiting for SSH to become available... 00:01:16.281 ==> default: Configuring and enabling network interfaces... 
00:01:18.816 default: SSH address: 192.168.121.195:22 00:01:18.816 default: SSH username: vagrant 00:01:18.816 default: SSH auth method: private key 00:01:20.720 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:28.837 ==> default: Mounting SSHFS shared folder... 00:01:30.214 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:30.214 ==> default: Checking Mount.. 00:01:31.594 ==> default: Folder Successfully Mounted! 00:01:31.594 ==> default: Running provisioner: file... 00:01:32.161 default: ~/.gitconfig => .gitconfig 00:01:32.729 00:01:32.729 SUCCESS! 00:01:32.729 00:01:32.729 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:32.729 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:32.729 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:32.729 00:01:32.738 [Pipeline] } 00:01:32.752 [Pipeline] // stage 00:01:32.761 [Pipeline] dir 00:01:32.761 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:32.763 [Pipeline] { 00:01:32.775 [Pipeline] catchError 00:01:32.777 [Pipeline] { 00:01:32.790 [Pipeline] sh 00:01:33.075 + vagrant ssh-config --host vagrant 00:01:33.075 + sed -ne /^Host/,$p 00:01:33.075 + tee ssh_conf 00:01:37.264 Host vagrant 00:01:37.264 HostName 192.168.121.195 00:01:37.264 User vagrant 00:01:37.264 Port 22 00:01:37.264 UserKnownHostsFile /dev/null 00:01:37.264 StrictHostKeyChecking no 00:01:37.264 PasswordAuthentication no 00:01:37.264 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:37.264 IdentitiesOnly yes 00:01:37.264 LogLevel FATAL 00:01:37.264 ForwardAgent yes 00:01:37.264 ForwardX11 yes 00:01:37.264 00:01:37.278 [Pipeline] withEnv 00:01:37.280 [Pipeline] { 00:01:37.294 [Pipeline] sh 00:01:37.574 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:37.574 source /etc/os-release 00:01:37.574 [[ -e /image.version ]] && img=$(< /image.version) 00:01:37.574 # Minimal, systemd-like check. 00:01:37.574 if [[ -e /.dockerenv ]]; then 00:01:37.574 # Clear garbage from the node's name: 00:01:37.574 # agt-er_autotest_547-896 -> autotest_547-896 00:01:37.574 # $HOSTNAME is the actual container id 00:01:37.574 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:37.574 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:37.574 # We can assume this is a mount from a host where container is running, 00:01:37.574 # so fetch its hostname to easily identify the target swarm worker. 
00:01:37.574 container="$(< /etc/hostname) ($agent)" 00:01:37.574 else 00:01:37.574 # Fallback 00:01:37.574 container=$agent 00:01:37.574 fi 00:01:37.574 fi 00:01:37.574 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:37.574 00:01:37.586 [Pipeline] } 00:01:37.602 [Pipeline] // withEnv 00:01:37.610 [Pipeline] setCustomBuildProperty 00:01:37.625 [Pipeline] stage 00:01:37.628 [Pipeline] { (Tests) 00:01:37.644 [Pipeline] sh 00:01:37.924 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:38.196 [Pipeline] sh 00:01:38.476 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:38.750 [Pipeline] timeout 00:01:38.750 Timeout set to expire in 1 hr 0 min 00:01:38.752 [Pipeline] { 00:01:38.767 [Pipeline] sh 00:01:39.047 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:39.615 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:39.627 [Pipeline] sh 00:01:39.907 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:40.180 [Pipeline] sh 00:01:40.495 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:40.510 [Pipeline] sh 00:01:40.792 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:40.792 ++ readlink -f spdk_repo 00:01:40.792 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:40.792 + [[ -n /home/vagrant/spdk_repo ]] 00:01:40.792 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:40.792 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:40.792 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:40.792 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:40.792 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:40.792 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:40.792 + cd /home/vagrant/spdk_repo 00:01:40.792 + source /etc/os-release 00:01:40.792 ++ NAME='Fedora Linux' 00:01:40.792 ++ VERSION='39 (Cloud Edition)' 00:01:40.792 ++ ID=fedora 00:01:40.792 ++ VERSION_ID=39 00:01:40.792 ++ VERSION_CODENAME= 00:01:40.792 ++ PLATFORM_ID=platform:f39 00:01:40.792 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:40.792 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:40.792 ++ LOGO=fedora-logo-icon 00:01:40.792 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:40.792 ++ HOME_URL=https://fedoraproject.org/ 00:01:40.792 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:40.792 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:40.792 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:40.792 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:40.792 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:40.792 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:40.792 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:40.792 ++ SUPPORT_END=2024-11-12 00:01:40.792 ++ VARIANT='Cloud Edition' 00:01:40.792 ++ VARIANT_ID=cloud 00:01:40.792 + uname -a 00:01:40.792 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:40.792 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:41.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:41.360 Hugepages 00:01:41.360 node hugesize free / total 00:01:41.360 node0 1048576kB 0 / 0 00:01:41.360 node0 2048kB 0 / 0 00:01:41.360 00:01:41.360 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:41.360 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:41.360 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:41.361 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:41.361 + rm -f /tmp/spdk-ld-path 00:01:41.361 + source autorun-spdk.conf 00:01:41.361 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.361 ++ SPDK_TEST_NVMF=1 00:01:41.361 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.361 ++ SPDK_TEST_URING=1 00:01:41.361 ++ SPDK_TEST_VFIOUSER=1 00:01:41.361 ++ SPDK_TEST_USDT=1 00:01:41.361 ++ SPDK_RUN_ASAN=1 00:01:41.361 ++ SPDK_RUN_UBSAN=1 00:01:41.361 ++ NET_TYPE=virt 00:01:41.361 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.361 ++ RUN_NIGHTLY=1 00:01:41.361 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:41.361 + [[ -n '' ]] 00:01:41.361 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:41.620 + for M in /var/spdk/build-*-manifest.txt 00:01:41.620 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:41.620 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:41.620 + for M in /var/spdk/build-*-manifest.txt 00:01:41.620 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:41.620 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:41.620 + for M in /var/spdk/build-*-manifest.txt 00:01:41.620 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:41.620 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:41.620 ++ uname 00:01:41.620 + [[ Linux == \L\i\n\u\x ]] 00:01:41.620 + sudo dmesg -T 00:01:41.620 + sudo dmesg --clear 00:01:41.620 + dmesg_pid=5252 00:01:41.620 + sudo dmesg -Tw 00:01:41.620 + [[ Fedora Linux == FreeBSD ]] 00:01:41.620 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.620 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:41.620 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:41.620 + [[ -x /usr/src/fio-static/fio ]] 00:01:41.620 + export FIO_BIN=/usr/src/fio-static/fio 00:01:41.620 + FIO_BIN=/usr/src/fio-static/fio 00:01:41.620 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:41.620 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:41.620 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:41.620 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.620 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:41.620 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:41.620 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.620 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:41.620 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:41.620 05:19:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:41.620 05:19:21 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_ASAN=1 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_UBSAN=1 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.620 05:19:21 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:41.620 05:19:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:41.620 05:19:21 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:41.620 05:19:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:41.620 05:19:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:41.620 05:19:21 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:41.620 05:19:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:41.620 05:19:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:41.620 05:19:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:41.620 05:19:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.621 05:19:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.621 05:19:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.621 05:19:21 -- paths/export.sh@5 -- $ export PATH 00:01:41.621 05:19:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:41.621 05:19:21 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:41.621 05:19:21 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:41.621 05:19:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734326361.XXXXXX 00:01:41.621 05:19:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734326361.9ZcgrF 00:01:41.621 05:19:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:41.621 05:19:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:41.621 05:19:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:41.621 05:19:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:41.621 05:19:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:41.621 05:19:21 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:41.621 05:19:21 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:41.621 05:19:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.880 05:19:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:01:41.880 05:19:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:41.880 05:19:21 -- pm/common@17 -- $ local monitor 00:01:41.880 05:19:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.880 05:19:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:41.880 05:19:21 -- pm/common@25 -- $ sleep 1 00:01:41.880 05:19:21 -- pm/common@21 -- $ date +%s 00:01:41.880 05:19:21 -- pm/common@21 -- $ date +%s 00:01:41.880 05:19:21 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734326361 00:01:41.880 05:19:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734326361 00:01:41.880 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734326361_collect-cpu-load.pm.log 00:01:41.880 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734326361_collect-vmstat.pm.log 00:01:42.818 05:19:22 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:42.818 05:19:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:42.818 05:19:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:42.818 05:19:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:42.818 05:19:22 -- spdk/autobuild.sh@16 -- $ date -u 00:01:42.818 Mon Dec 16 05:19:22 AM UTC 2024 00:01:42.818 05:19:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:42.818 v25.01-rc1-2-ge01cb43b8 00:01:42.818 05:19:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:42.818 05:19:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:42.818 05:19:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:42.818 05:19:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:42.818 05:19:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.818 ************************************ 00:01:42.818 START TEST asan 00:01:42.818 ************************************ 00:01:42.818 using asan 00:01:42.818 05:19:22 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:42.818 00:01:42.818 real 0m0.000s 00:01:42.818 user 0m0.000s 00:01:42.818 sys 0m0.000s 00:01:42.818 05:19:22 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:42.818 05:19:22 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:42.818 ************************************ 00:01:42.818 END TEST asan 00:01:42.818 ************************************ 00:01:42.818 05:19:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:42.818 05:19:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:42.818 05:19:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:42.818 05:19:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:42.818 05:19:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.818 ************************************ 00:01:42.818 START TEST ubsan 00:01:42.818 ************************************ 00:01:42.818 using ubsan 00:01:42.818 05:19:22 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:42.818 00:01:42.818 real 0m0.000s 00:01:42.818 user 0m0.000s 00:01:42.818 sys 0m0.000s 00:01:42.818 05:19:22 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:42.818 05:19:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:42.818 ************************************ 00:01:42.818 END TEST ubsan 00:01:42.818 ************************************ 00:01:42.818 05:19:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:42.818 05:19:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:42.818 05:19:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:42.818 05:19:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:42.818 05:19:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:42.818 05:19:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:42.818 05:19:23 -- spdk/autobuild.sh@59 -- $ 
[[ 0 -eq 1 ]] 00:01:42.818 05:19:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:42.818 05:19:23 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:01:43.077 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:43.077 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:43.645 Using 'verbs' RDMA provider 00:01:56.786 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:11.689 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:11.689 Creating mk/config.mk...done. 00:02:11.689 Creating mk/cc.flags.mk...done. 00:02:11.689 Type 'make' to build. 00:02:11.689 05:19:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:11.689 05:19:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:11.689 05:19:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:11.689 05:19:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.689 ************************************ 00:02:11.689 START TEST make 00:02:11.689 ************************************ 00:02:11.689 05:19:50 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:11.983 The Meson build system 00:02:11.983 Version: 1.5.0 00:02:11.983 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:11.983 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:11.983 Build type: native build 00:02:11.983 Project name: libvfio-user 00:02:11.983 Project version: 0.0.1 00:02:11.983 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:11.983 C linker for the host machine: cc ld.bfd 2.40-14 00:02:11.983 Host machine cpu family: x86_64 00:02:11.983 Host machine cpu: x86_64 00:02:11.983 Run-time dependency threads found: YES 00:02:11.983 Library dl found: YES 00:02:11.983 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:11.983 Run-time dependency json-c found: YES 0.17 00:02:11.983 Run-time dependency cmocka found: YES 1.1.7 00:02:11.983 Program pytest-3 found: NO 00:02:11.983 Program flake8 found: NO 00:02:11.983 Program misspell-fixer found: NO 00:02:11.983 Program restructuredtext-lint found: NO 00:02:11.983 Program valgrind found: YES (/usr/bin/valgrind) 00:02:11.983 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.983 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.983 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.983 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:11.983 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:11.983 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:11.983 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:11.983 Build targets in project: 8 00:02:11.983 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:11.983 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:11.983 00:02:11.983 libvfio-user 0.0.1 00:02:11.983 00:02:11.983 User defined options 00:02:11.983 buildtype : debug 00:02:11.983 default_library: shared 00:02:11.983 libdir : /usr/local/lib 00:02:11.983 00:02:11.983 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.549 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:12.807 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:12.808 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:12.808 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:12.808 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:12.808 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:12.808 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:12.808 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:12.808 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:12.808 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:12.808 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:12.808 [11/37] Compiling C object samples/null.p/null.c.o 00:02:12.808 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:12.808 [13/37] Compiling C object samples/client.p/client.c.o 00:02:12.808 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:12.808 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:12.808 [16/37] Compiling C object samples/server.p/server.c.o 00:02:12.808 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:13.066 [18/37] Linking target samples/client 00:02:13.066 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:13.066 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:13.066 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:13.066 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:13.066 [23/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:13.066 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:13.066 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:13.066 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:13.066 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:02:13.066 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:13.066 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:13.066 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:13.066 [31/37] Linking target test/unit_tests 00:02:13.324 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:13.324 [33/37] Linking target samples/server 00:02:13.324 [34/37] Linking target samples/lspci 00:02:13.324 [35/37] Linking target samples/null 00:02:13.324 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:13.324 [37/37] Linking target samples/gpio-pci-idio-16 00:02:13.324 INFO: autodetecting backend as ninja 00:02:13.324 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:13.324 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:13.890 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:13.890 ninja: no work to do. 00:02:23.858 The Meson build system 00:02:23.858 Version: 1.5.0 00:02:23.858 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:23.858 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:23.858 Build type: native build 00:02:23.858 Program cat found: YES (/usr/bin/cat) 00:02:23.858 Project name: DPDK 00:02:23.858 Project version: 24.03.0 00:02:23.858 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:23.858 C linker for the host machine: cc ld.bfd 2.40-14 00:02:23.858 Host machine cpu family: x86_64 00:02:23.858 Host machine cpu: x86_64 00:02:23.858 Message: ## Building in Developer Mode ## 00:02:23.858 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:23.858 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:23.858 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:23.858 Program python3 found: YES (/usr/bin/python3) 00:02:23.858 Program cat found: YES (/usr/bin/cat) 00:02:23.858 Compiler for C supports arguments -march=native: YES 00:02:23.858 Checking for size of "void *" : 8 00:02:23.858 Checking for size of "void *" : 8 (cached) 00:02:23.858 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:23.858 Library m found: YES 00:02:23.858 Library numa found: YES 00:02:23.858 Has header "numaif.h" : YES 00:02:23.858 Library fdt found: NO 00:02:23.858 Library execinfo found: NO 00:02:23.858 Has header "execinfo.h" : YES 00:02:23.858 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.858 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:23.858 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:23.858 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:23.858 Run-time dependency openssl found: YES 3.1.1 00:02:23.858 Run-time dependency libpcap found: YES 1.10.4 00:02:23.858 Has header "pcap.h" with dependency libpcap: YES 00:02:23.858 Compiler for C supports arguments -Wcast-qual: YES 00:02:23.858 Compiler for C supports arguments -Wdeprecated: YES 00:02:23.858 Compiler for C supports arguments -Wformat: YES 00:02:23.858 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:23.858 Compiler for C supports arguments -Wformat-security: NO 00:02:23.858 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.858 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:23.858 Compiler for C supports arguments -Wnested-externs: YES 00:02:23.858 Compiler for C supports arguments -Wold-style-definition: YES 00:02:23.858 Compiler for C supports arguments -Wpointer-arith: YES 00:02:23.858 Compiler for C supports arguments -Wsign-compare: YES 00:02:23.858 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:23.858 Compiler for C supports arguments -Wundef: YES 00:02:23.858 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.858 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:23.858 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:23.858 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.858 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:02:23.858 Program objdump found: YES (/usr/bin/objdump) 00:02:23.858 Compiler for C supports arguments -mavx512f: YES 00:02:23.858 Checking if "AVX512 checking" compiles: YES 00:02:23.858 Fetching value of define "__SSE4_2__" : 1 00:02:23.858 Fetching value of define "__AES__" : 1 00:02:23.858 Fetching value of define "__AVX__" : 1 00:02:23.858 Fetching value of define "__AVX2__" : 1 00:02:23.858 Fetching value of define "__AVX512BW__" : (undefined) 00:02:23.858 Fetching value of define "__AVX512CD__" : (undefined) 00:02:23.858 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:23.858 Fetching value of define "__AVX512F__" : (undefined) 00:02:23.858 Fetching value of define "__AVX512VL__" : (undefined) 00:02:23.858 Fetching value of define "__PCLMUL__" : 1 00:02:23.858 Fetching value of define "__RDRND__" : 1 00:02:23.858 Fetching value of define "__RDSEED__" : 1 00:02:23.858 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:23.858 Fetching value of define "__znver1__" : (undefined) 00:02:23.858 Fetching value of define "__znver2__" : (undefined) 00:02:23.858 Fetching value of define "__znver3__" : (undefined) 00:02:23.858 Fetching value of define "__znver4__" : (undefined) 00:02:23.858 Library asan found: YES 00:02:23.858 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:23.858 Message: lib/log: Defining dependency "log" 00:02:23.858 Message: lib/kvargs: Defining dependency "kvargs" 00:02:23.858 Message: lib/telemetry: Defining dependency "telemetry" 00:02:23.858 Library rt found: YES 00:02:23.858 Checking for function "getentropy" : NO 00:02:23.858 Message: lib/eal: Defining dependency "eal" 00:02:23.858 Message: lib/ring: Defining dependency "ring" 00:02:23.858 Message: lib/rcu: Defining dependency "rcu" 00:02:23.858 Message: lib/mempool: Defining dependency "mempool" 00:02:23.859 Message: lib/mbuf: Defining dependency "mbuf" 00:02:23.859 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:23.859 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:23.859 Compiler for C supports arguments -mpclmul: YES 00:02:23.859 Compiler for C supports arguments -maes: YES 00:02:23.859 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.859 Compiler for C supports arguments -mavx512bw: YES 00:02:23.859 Compiler for C supports arguments -mavx512dq: YES 00:02:23.859 Compiler for C supports arguments -mavx512vl: YES 00:02:23.859 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:23.859 Compiler for C supports arguments -mavx2: YES 00:02:23.859 Compiler for C supports arguments -mavx: YES 00:02:23.859 Message: lib/net: Defining dependency "net" 00:02:23.859 Message: lib/meter: Defining dependency "meter" 00:02:23.859 Message: lib/ethdev: Defining dependency "ethdev" 00:02:23.859 Message: lib/pci: Defining dependency "pci" 00:02:23.859 Message: lib/cmdline: Defining dependency "cmdline" 00:02:23.859 Message: lib/hash: Defining dependency "hash" 00:02:23.859 Message: lib/timer: Defining dependency "timer" 00:02:23.859 Message: lib/compressdev: Defining dependency "compressdev" 00:02:23.859 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:23.859 Message: lib/dmadev: Defining dependency "dmadev" 00:02:23.859 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:23.859 Message: lib/power: Defining dependency "power" 00:02:23.859 Message: lib/reorder: Defining dependency "reorder" 00:02:23.859 Message: lib/security: Defining dependency "security" 00:02:23.859 Has header 
"linux/userfaultfd.h" : YES 00:02:23.859 Has header "linux/vduse.h" : YES 00:02:23.859 Message: lib/vhost: Defining dependency "vhost" 00:02:23.859 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.859 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.859 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.859 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.859 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:23.859 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:23.859 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:23.859 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:23.859 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:23.859 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:23.859 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:23.859 Configuring doxy-api-html.conf using configuration 00:02:23.859 Configuring doxy-api-man.conf using configuration 00:02:23.859 Program mandb found: YES (/usr/bin/mandb) 00:02:23.859 Program sphinx-build found: NO 00:02:23.859 Configuring rte_build_config.h using configuration 00:02:23.859 Message: 00:02:23.859 ================= 00:02:23.859 Applications Enabled 00:02:23.859 ================= 00:02:23.859 00:02:23.859 apps: 00:02:23.859 00:02:23.859 00:02:23.859 Message: 00:02:23.859 ================= 00:02:23.859 Libraries Enabled 00:02:23.859 ================= 00:02:23.859 00:02:23.859 libs: 00:02:23.859 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:23.859 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:23.859 cryptodev, dmadev, power, reorder, security, vhost, 00:02:23.859 00:02:23.859 Message: 00:02:23.859 =============== 00:02:23.859 Drivers Enabled 00:02:23.859 =============== 00:02:23.859 00:02:23.859 common: 00:02:23.859 00:02:23.859 bus: 00:02:23.859 pci, vdev, 00:02:23.859 mempool: 00:02:23.859 ring, 00:02:23.859 dma: 00:02:23.859 00:02:23.859 net: 00:02:23.859 00:02:23.859 crypto: 00:02:23.859 00:02:23.859 compress: 00:02:23.859 00:02:23.859 vdpa: 00:02:23.859 00:02:23.859 00:02:23.859 Message: 00:02:23.859 ================= 00:02:23.859 Content Skipped 00:02:23.859 ================= 00:02:23.859 00:02:23.859 apps: 00:02:23.859 dumpcap: explicitly disabled via build config 00:02:23.859 graph: explicitly disabled via build config 00:02:23.859 pdump: explicitly disabled via build config 00:02:23.859 proc-info: explicitly disabled via build config 00:02:23.859 test-acl: explicitly disabled via build config 00:02:23.859 test-bbdev: explicitly disabled via build config 00:02:23.859 test-cmdline: explicitly disabled via build config 00:02:23.859 test-compress-perf: explicitly disabled via build config 00:02:23.859 test-crypto-perf: explicitly disabled via build config 00:02:23.859 test-dma-perf: explicitly disabled via build config 00:02:23.859 test-eventdev: explicitly disabled via build config 00:02:23.859 test-fib: explicitly disabled via build config 00:02:23.859 test-flow-perf: explicitly disabled via build config 00:02:23.859 test-gpudev: explicitly disabled via build config 00:02:23.859 test-mldev: explicitly disabled via build config 00:02:23.859 test-pipeline: explicitly disabled via build config 00:02:23.859 test-pmd: explicitly disabled via build config 00:02:23.859 test-regex: explicitly disabled via build config 00:02:23.859 
test-sad: explicitly disabled via build config 00:02:23.859 test-security-perf: explicitly disabled via build config 00:02:23.859 00:02:23.859 libs: 00:02:23.859 argparse: explicitly disabled via build config 00:02:23.859 metrics: explicitly disabled via build config 00:02:23.859 acl: explicitly disabled via build config 00:02:23.859 bbdev: explicitly disabled via build config 00:02:23.859 bitratestats: explicitly disabled via build config 00:02:23.859 bpf: explicitly disabled via build config 00:02:23.859 cfgfile: explicitly disabled via build config 00:02:23.859 distributor: explicitly disabled via build config 00:02:23.859 efd: explicitly disabled via build config 00:02:23.859 eventdev: explicitly disabled via build config 00:02:23.859 dispatcher: explicitly disabled via build config 00:02:23.859 gpudev: explicitly disabled via build config 00:02:23.859 gro: explicitly disabled via build config 00:02:23.859 gso: explicitly disabled via build config 00:02:23.859 ip_frag: explicitly disabled via build config 00:02:23.859 jobstats: explicitly disabled via build config 00:02:23.859 latencystats: explicitly disabled via build config 00:02:23.859 lpm: explicitly disabled via build config 00:02:23.859 member: explicitly disabled via build config 00:02:23.859 pcapng: explicitly disabled via build config 00:02:23.859 rawdev: explicitly disabled via build config 00:02:23.859 regexdev: explicitly disabled via build config 00:02:23.859 mldev: explicitly disabled via build config 00:02:23.859 rib: explicitly disabled via build config 00:02:23.859 sched: explicitly disabled via build config 00:02:23.859 stack: explicitly disabled via build config 00:02:23.859 ipsec: explicitly disabled via build config 00:02:23.859 pdcp: explicitly disabled via build config 00:02:23.859 fib: explicitly disabled via build config 00:02:23.859 port: explicitly disabled via build config 00:02:23.859 pdump: explicitly disabled via build config 00:02:23.859 table: explicitly disabled via build config 00:02:23.859 pipeline: explicitly disabled via build config 00:02:23.859 graph: explicitly disabled via build config 00:02:23.859 node: explicitly disabled via build config 00:02:23.859 00:02:23.859 drivers: 00:02:23.859 common/cpt: not in enabled drivers build config 00:02:23.859 common/dpaax: not in enabled drivers build config 00:02:23.859 common/iavf: not in enabled drivers build config 00:02:23.859 common/idpf: not in enabled drivers build config 00:02:23.859 common/ionic: not in enabled drivers build config 00:02:23.859 common/mvep: not in enabled drivers build config 00:02:23.859 common/octeontx: not in enabled drivers build config 00:02:23.859 bus/auxiliary: not in enabled drivers build config 00:02:23.859 bus/cdx: not in enabled drivers build config 00:02:23.859 bus/dpaa: not in enabled drivers build config 00:02:23.859 bus/fslmc: not in enabled drivers build config 00:02:23.859 bus/ifpga: not in enabled drivers build config 00:02:23.859 bus/platform: not in enabled drivers build config 00:02:23.859 bus/uacce: not in enabled drivers build config 00:02:23.859 bus/vmbus: not in enabled drivers build config 00:02:23.859 common/cnxk: not in enabled drivers build config 00:02:23.859 common/mlx5: not in enabled drivers build config 00:02:23.859 common/nfp: not in enabled drivers build config 00:02:23.859 common/nitrox: not in enabled drivers build config 00:02:23.859 common/qat: not in enabled drivers build config 00:02:23.859 common/sfc_efx: not in enabled drivers build config 00:02:23.859 mempool/bucket: not in enabled 
drivers build config 00:02:23.859 mempool/cnxk: not in enabled drivers build config 00:02:23.859 mempool/dpaa: not in enabled drivers build config 00:02:23.859 mempool/dpaa2: not in enabled drivers build config 00:02:23.859 mempool/octeontx: not in enabled drivers build config 00:02:23.859 mempool/stack: not in enabled drivers build config 00:02:23.859 dma/cnxk: not in enabled drivers build config 00:02:23.859 dma/dpaa: not in enabled drivers build config 00:02:23.859 dma/dpaa2: not in enabled drivers build config 00:02:23.859 dma/hisilicon: not in enabled drivers build config 00:02:23.859 dma/idxd: not in enabled drivers build config 00:02:23.859 dma/ioat: not in enabled drivers build config 00:02:23.859 dma/skeleton: not in enabled drivers build config 00:02:23.859 net/af_packet: not in enabled drivers build config 00:02:23.859 net/af_xdp: not in enabled drivers build config 00:02:23.859 net/ark: not in enabled drivers build config 00:02:23.859 net/atlantic: not in enabled drivers build config 00:02:23.859 net/avp: not in enabled drivers build config 00:02:23.859 net/axgbe: not in enabled drivers build config 00:02:23.859 net/bnx2x: not in enabled drivers build config 00:02:23.859 net/bnxt: not in enabled drivers build config 00:02:23.859 net/bonding: not in enabled drivers build config 00:02:23.859 net/cnxk: not in enabled drivers build config 00:02:23.859 net/cpfl: not in enabled drivers build config 00:02:23.859 net/cxgbe: not in enabled drivers build config 00:02:23.859 net/dpaa: not in enabled drivers build config 00:02:23.860 net/dpaa2: not in enabled drivers build config 00:02:23.860 net/e1000: not in enabled drivers build config 00:02:23.860 net/ena: not in enabled drivers build config 00:02:23.860 net/enetc: not in enabled drivers build config 00:02:23.860 net/enetfec: not in enabled drivers build config 00:02:23.860 net/enic: not in enabled drivers build config 00:02:23.860 net/failsafe: not in enabled drivers build config 00:02:23.860 net/fm10k: not in enabled drivers build config 00:02:23.860 net/gve: not in enabled drivers build config 00:02:23.860 net/hinic: not in enabled drivers build config 00:02:23.860 net/hns3: not in enabled drivers build config 00:02:23.860 net/i40e: not in enabled drivers build config 00:02:23.860 net/iavf: not in enabled drivers build config 00:02:23.860 net/ice: not in enabled drivers build config 00:02:23.860 net/idpf: not in enabled drivers build config 00:02:23.860 net/igc: not in enabled drivers build config 00:02:23.860 net/ionic: not in enabled drivers build config 00:02:23.860 net/ipn3ke: not in enabled drivers build config 00:02:23.860 net/ixgbe: not in enabled drivers build config 00:02:23.860 net/mana: not in enabled drivers build config 00:02:23.860 net/memif: not in enabled drivers build config 00:02:23.860 net/mlx4: not in enabled drivers build config 00:02:23.860 net/mlx5: not in enabled drivers build config 00:02:23.860 net/mvneta: not in enabled drivers build config 00:02:23.860 net/mvpp2: not in enabled drivers build config 00:02:23.860 net/netvsc: not in enabled drivers build config 00:02:23.860 net/nfb: not in enabled drivers build config 00:02:23.860 net/nfp: not in enabled drivers build config 00:02:23.860 net/ngbe: not in enabled drivers build config 00:02:23.860 net/null: not in enabled drivers build config 00:02:23.860 net/octeontx: not in enabled drivers build config 00:02:23.860 net/octeon_ep: not in enabled drivers build config 00:02:23.860 net/pcap: not in enabled drivers build config 00:02:23.860 net/pfe: not in 
enabled drivers build config 00:02:23.860 net/qede: not in enabled drivers build config 00:02:23.860 net/ring: not in enabled drivers build config 00:02:23.860 net/sfc: not in enabled drivers build config 00:02:23.860 net/softnic: not in enabled drivers build config 00:02:23.860 net/tap: not in enabled drivers build config 00:02:23.860 net/thunderx: not in enabled drivers build config 00:02:23.860 net/txgbe: not in enabled drivers build config 00:02:23.860 net/vdev_netvsc: not in enabled drivers build config 00:02:23.860 net/vhost: not in enabled drivers build config 00:02:23.860 net/virtio: not in enabled drivers build config 00:02:23.860 net/vmxnet3: not in enabled drivers build config 00:02:23.860 raw/*: missing internal dependency, "rawdev" 00:02:23.860 crypto/armv8: not in enabled drivers build config 00:02:23.860 crypto/bcmfs: not in enabled drivers build config 00:02:23.860 crypto/caam_jr: not in enabled drivers build config 00:02:23.860 crypto/ccp: not in enabled drivers build config 00:02:23.860 crypto/cnxk: not in enabled drivers build config 00:02:23.860 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.860 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.860 crypto/ipsec_mb: not in enabled drivers build config 00:02:23.860 crypto/mlx5: not in enabled drivers build config 00:02:23.860 crypto/mvsam: not in enabled drivers build config 00:02:23.860 crypto/nitrox: not in enabled drivers build config 00:02:23.860 crypto/null: not in enabled drivers build config 00:02:23.860 crypto/octeontx: not in enabled drivers build config 00:02:23.860 crypto/openssl: not in enabled drivers build config 00:02:23.860 crypto/scheduler: not in enabled drivers build config 00:02:23.860 crypto/uadk: not in enabled drivers build config 00:02:23.860 crypto/virtio: not in enabled drivers build config 00:02:23.860 compress/isal: not in enabled drivers build config 00:02:23.860 compress/mlx5: not in enabled drivers build config 00:02:23.860 compress/nitrox: not in enabled drivers build config 00:02:23.860 compress/octeontx: not in enabled drivers build config 00:02:23.860 compress/zlib: not in enabled drivers build config 00:02:23.860 regex/*: missing internal dependency, "regexdev" 00:02:23.860 ml/*: missing internal dependency, "mldev" 00:02:23.860 vdpa/ifc: not in enabled drivers build config 00:02:23.860 vdpa/mlx5: not in enabled drivers build config 00:02:23.860 vdpa/nfp: not in enabled drivers build config 00:02:23.860 vdpa/sfc: not in enabled drivers build config 00:02:23.860 event/*: missing internal dependency, "eventdev" 00:02:23.860 baseband/*: missing internal dependency, "bbdev" 00:02:23.860 gpu/*: missing internal dependency, "gpudev" 00:02:23.860 00:02:23.860 00:02:23.860 Build targets in project: 85 00:02:23.860 00:02:23.860 DPDK 24.03.0 00:02:23.860 00:02:23.860 User defined options 00:02:23.860 buildtype : debug 00:02:23.860 default_library : shared 00:02:23.860 libdir : lib 00:02:23.860 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:23.860 b_sanitize : address 00:02:23.860 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:23.860 c_link_args : 00:02:23.860 cpu_instruction_set: native 00:02:23.860 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:23.860 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:23.860 enable_docs : false 00:02:23.860 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:23.860 enable_kmods : false 00:02:23.860 max_lcores : 128 00:02:23.860 tests : false 00:02:23.860 00:02:23.860 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.425 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:24.425 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:24.425 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:24.425 [3/268] Linking static target lib/librte_kvargs.a 00:02:24.425 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:24.425 [5/268] Linking static target lib/librte_log.a 00:02:24.425 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:24.998 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.998 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:24.998 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.998 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:24.998 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:25.256 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:25.256 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:25.256 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:25.256 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:25.513 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.513 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.513 [18/268] Linking static target lib/librte_telemetry.a 00:02:25.513 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:25.513 [20/268] Linking target lib/librte_log.so.24.1 00:02:25.772 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:25.772 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:26.030 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:26.030 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:26.030 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:26.030 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:26.030 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:26.288 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.288 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:26.288 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.288 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:26.288 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.546 
[33/268] Linking target lib/librte_telemetry.so.24.1 00:02:26.546 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:26.804 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:26.804 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:26.804 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:27.062 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:27.062 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:27.062 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:27.062 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:27.062 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:27.062 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:27.320 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.320 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.578 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:27.578 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.836 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:27.836 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:27.836 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:28.094 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:28.094 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:28.094 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:28.094 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:28.352 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.352 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:28.352 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:28.610 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:28.869 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:28.869 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:28.869 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:28.869 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:28.869 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.127 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.127 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.127 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.127 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.385 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:29.385 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:29.385 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:29.643 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:29.643 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
00:02:29.643 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:29.643 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:29.643 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:29.901 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:29.901 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:29.901 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:29.901 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.159 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:30.159 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.159 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:30.159 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.417 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.417 [85/268] Linking static target lib/librte_ring.a 00:02:30.417 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:30.417 [87/268] Linking static target lib/librte_eal.a 00:02:30.681 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.681 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:30.938 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.938 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.938 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.938 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:30.938 [94/268] Linking static target lib/librte_mempool.a 00:02:31.503 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:31.503 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:31.503 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:31.503 [98/268] Linking static target lib/librte_rcu.a 00:02:31.503 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:31.761 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:31.761 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:31.761 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:31.761 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.761 [104/268] Linking static target lib/librte_mbuf.a 00:02:31.761 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:32.019 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:32.019 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.019 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:32.019 [109/268] Linking static target lib/librte_net.a 00:02:32.277 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.277 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:32.277 [112/268] Linking static target lib/librte_meter.a 00:02:32.535 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:32.535 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:32.535 [115/268] Generating lib/net.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:32.535 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:32.793 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:32.793 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.793 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.359 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:33.359 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:33.359 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:33.925 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:33.925 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:33.925 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:33.925 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:33.925 [127/268] Linking static target lib/librte_pci.a 00:02:33.925 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:33.925 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.184 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:34.184 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:34.184 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.184 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.184 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.184 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.184 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.184 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.442 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.442 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.442 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:34.442 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.442 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.442 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.700 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:34.700 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:34.958 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:34.958 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:34.958 [148/268] Linking static target lib/librte_cmdline.a 00:02:34.958 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.215 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.215 [151/268] Linking static target lib/librte_ethdev.a 00:02:35.473 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.474 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.474 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.474 [155/268] Linking 
static target lib/librte_timer.a 00:02:35.474 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.731 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.990 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.990 [159/268] Linking static target lib/librte_compressdev.a 00:02:35.990 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.990 [161/268] Linking static target lib/librte_hash.a 00:02:35.990 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.990 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.248 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:36.248 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.506 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:36.506 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:36.765 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.765 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:36.765 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:36.765 [171/268] Linking static target lib/librte_dmadev.a 00:02:36.765 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:37.024 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:37.024 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.282 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.282 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:37.540 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:37.540 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:37.540 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:37.540 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:37.540 [181/268] Linking static target lib/librte_cryptodev.a 00:02:37.540 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:37.799 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.799 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:38.365 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:38.365 [186/268] Linking static target lib/librte_power.a 00:02:38.365 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:38.365 [188/268] Linking static target lib/librte_reorder.a 00:02:38.365 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:38.623 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:38.623 [191/268] Linking static target lib/librte_security.a 00:02:38.623 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:38.623 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:38.881 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:39.140 [195/268] Generating lib/reorder.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:39.398 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.656 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.656 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:39.656 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:39.918 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:39.918 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.176 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.176 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:40.433 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:40.433 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:40.692 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:40.692 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:40.692 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:40.692 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:40.950 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:40.950 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:41.208 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.208 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:41.208 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.208 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.208 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.208 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.208 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:41.208 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:41.208 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.208 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:41.466 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:41.466 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.466 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.466 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:41.466 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.724 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.659 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.659 [229/268] Linking target lib/librte_eal.so.24.1 00:02:42.917 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:42.917 [231/268] Linking target lib/librte_meter.so.24.1 00:02:42.917 [232/268] Linking target lib/librte_pci.so.24.1 
00:02:42.917 [233/268] Linking target lib/librte_timer.so.24.1 00:02:42.917 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:42.917 [235/268] Linking target lib/librte_ring.so.24.1 00:02:42.917 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:42.917 [237/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:43.175 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:43.175 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:43.175 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:43.175 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:43.175 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:43.175 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:43.175 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:43.175 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:43.433 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:43.433 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:43.433 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:43.433 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:43.692 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:43.692 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:43.692 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:43.692 [253/268] Linking target lib/librte_net.so.24.1 00:02:43.692 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:43.692 [255/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.692 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:43.950 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:43.950 [258/268] Linking target lib/librte_hash.so.24.1 00:02:43.950 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:43.950 [260/268] Linking target lib/librte_ethdev.so.24.1 00:02:43.950 [261/268] Linking target lib/librte_security.so.24.1 00:02:43.950 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:43.950 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:43.950 [264/268] Linking target lib/librte_power.so.24.1 00:02:48.148 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:48.148 [266/268] Linking static target lib/librte_vhost.a 00:02:49.083 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.083 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:49.083 INFO: autodetecting backend as ninja 00:02:49.083 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.011 CC lib/ut/ut.o 00:03:11.011 CC lib/ut_mock/mock.o 00:03:11.011 CC lib/log/log.o 00:03:11.011 CC lib/log/log_flags.o 00:03:11.011 CC lib/log/log_deprecated.o 00:03:11.011 LIB libspdk_ut.a 00:03:11.011 SO libspdk_ut.so.2.0 00:03:11.011 LIB libspdk_ut_mock.a 00:03:11.011 LIB libspdk_log.a 00:03:11.011 SO libspdk_ut_mock.so.6.0 00:03:11.011 SO libspdk_log.so.7.1 00:03:11.011 SYMLINK libspdk_ut.so 00:03:11.011 SYMLINK libspdk_ut_mock.so 00:03:11.011 
SYMLINK libspdk_log.so 00:03:11.011 CC lib/ioat/ioat.o 00:03:11.011 CXX lib/trace_parser/trace.o 00:03:11.011 CC lib/dma/dma.o 00:03:11.011 CC lib/util/base64.o 00:03:11.011 CC lib/util/bit_array.o 00:03:11.011 CC lib/util/cpuset.o 00:03:11.011 CC lib/util/crc16.o 00:03:11.011 CC lib/util/crc32.o 00:03:11.011 CC lib/util/crc32c.o 00:03:11.011 CC lib/util/crc32_ieee.o 00:03:11.011 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.011 CC lib/util/crc64.o 00:03:11.011 CC lib/util/dif.o 00:03:11.011 CC lib/util/fd.o 00:03:11.011 CC lib/vfio_user/host/vfio_user.o 00:03:11.011 LIB libspdk_dma.a 00:03:11.011 CC lib/util/fd_group.o 00:03:11.011 SO libspdk_dma.so.5.0 00:03:11.011 SYMLINK libspdk_dma.so 00:03:11.011 CC lib/util/file.o 00:03:11.011 CC lib/util/hexlify.o 00:03:11.011 CC lib/util/iov.o 00:03:11.011 CC lib/util/math.o 00:03:11.011 LIB libspdk_ioat.a 00:03:11.011 SO libspdk_ioat.so.7.0 00:03:11.011 SYMLINK libspdk_ioat.so 00:03:11.011 CC lib/util/net.o 00:03:11.011 CC lib/util/pipe.o 00:03:11.011 LIB libspdk_vfio_user.a 00:03:11.011 CC lib/util/strerror_tls.o 00:03:11.011 CC lib/util/string.o 00:03:11.011 SO libspdk_vfio_user.so.5.0 00:03:11.011 CC lib/util/uuid.o 00:03:11.011 CC lib/util/xor.o 00:03:11.011 SYMLINK libspdk_vfio_user.so 00:03:11.011 CC lib/util/zipf.o 00:03:11.011 CC lib/util/md5.o 00:03:11.578 LIB libspdk_util.a 00:03:11.578 SO libspdk_util.so.10.1 00:03:11.578 LIB libspdk_trace_parser.a 00:03:11.578 SO libspdk_trace_parser.so.6.0 00:03:11.578 SYMLINK libspdk_util.so 00:03:11.836 SYMLINK libspdk_trace_parser.so 00:03:11.836 CC lib/vmd/vmd.o 00:03:11.836 CC lib/vmd/led.o 00:03:11.836 CC lib/idxd/idxd.o 00:03:11.836 CC lib/json/json_parse.o 00:03:11.836 CC lib/idxd/idxd_user.o 00:03:11.836 CC lib/env_dpdk/env.o 00:03:11.836 CC lib/json/json_util.o 00:03:11.836 CC lib/json/json_write.o 00:03:11.836 CC lib/rdma_utils/rdma_utils.o 00:03:11.836 CC lib/conf/conf.o 00:03:12.094 CC lib/idxd/idxd_kernel.o 00:03:12.094 LIB libspdk_conf.a 00:03:12.094 CC lib/env_dpdk/memory.o 00:03:12.094 CC lib/env_dpdk/pci.o 00:03:12.094 CC lib/env_dpdk/init.o 00:03:12.094 SO libspdk_conf.so.6.0 00:03:12.352 LIB libspdk_json.a 00:03:12.352 SYMLINK libspdk_conf.so 00:03:12.352 LIB libspdk_rdma_utils.a 00:03:12.352 CC lib/env_dpdk/threads.o 00:03:12.352 SO libspdk_json.so.6.0 00:03:12.352 SO libspdk_rdma_utils.so.1.0 00:03:12.352 SYMLINK libspdk_json.so 00:03:12.352 SYMLINK libspdk_rdma_utils.so 00:03:12.352 CC lib/env_dpdk/pci_ioat.o 00:03:12.352 CC lib/env_dpdk/pci_virtio.o 00:03:12.352 CC lib/env_dpdk/pci_vmd.o 00:03:12.610 CC lib/env_dpdk/pci_idxd.o 00:03:12.610 CC lib/env_dpdk/pci_event.o 00:03:12.610 CC lib/jsonrpc/jsonrpc_server.o 00:03:12.610 CC lib/env_dpdk/sigbus_handler.o 00:03:12.610 CC lib/env_dpdk/pci_dpdk.o 00:03:12.610 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:12.610 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:12.610 LIB libspdk_idxd.a 00:03:12.868 SO libspdk_idxd.so.12.1 00:03:12.869 LIB libspdk_vmd.a 00:03:12.869 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:12.869 SO libspdk_vmd.so.6.0 00:03:12.869 CC lib/jsonrpc/jsonrpc_client.o 00:03:12.869 SYMLINK libspdk_idxd.so 00:03:12.869 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:12.869 SYMLINK libspdk_vmd.so 00:03:12.869 CC lib/rdma_provider/common.o 00:03:12.869 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:13.127 LIB libspdk_jsonrpc.a 00:03:13.127 SO libspdk_jsonrpc.so.6.0 00:03:13.127 LIB libspdk_rdma_provider.a 00:03:13.127 SO libspdk_rdma_provider.so.7.0 00:03:13.127 SYMLINK libspdk_jsonrpc.so 00:03:13.386 SYMLINK libspdk_rdma_provider.so 
00:03:13.386 CC lib/rpc/rpc.o 00:03:13.643 LIB libspdk_rpc.a 00:03:13.902 SO libspdk_rpc.so.6.0 00:03:13.902 LIB libspdk_env_dpdk.a 00:03:13.902 SYMLINK libspdk_rpc.so 00:03:13.902 SO libspdk_env_dpdk.so.15.1 00:03:14.161 SYMLINK libspdk_env_dpdk.so 00:03:14.161 CC lib/keyring/keyring.o 00:03:14.161 CC lib/trace/trace.o 00:03:14.161 CC lib/keyring/keyring_rpc.o 00:03:14.161 CC lib/trace/trace_rpc.o 00:03:14.161 CC lib/trace/trace_flags.o 00:03:14.161 CC lib/notify/notify.o 00:03:14.161 CC lib/notify/notify_rpc.o 00:03:14.419 LIB libspdk_notify.a 00:03:14.420 SO libspdk_notify.so.6.0 00:03:14.420 SYMLINK libspdk_notify.so 00:03:14.420 LIB libspdk_keyring.a 00:03:14.420 SO libspdk_keyring.so.2.0 00:03:14.420 LIB libspdk_trace.a 00:03:14.420 SO libspdk_trace.so.11.0 00:03:14.678 SYMLINK libspdk_keyring.so 00:03:14.678 SYMLINK libspdk_trace.so 00:03:14.937 CC lib/thread/thread.o 00:03:14.937 CC lib/thread/iobuf.o 00:03:14.937 CC lib/sock/sock.o 00:03:14.937 CC lib/sock/sock_rpc.o 00:03:15.504 LIB libspdk_sock.a 00:03:15.504 SO libspdk_sock.so.10.0 00:03:15.762 SYMLINK libspdk_sock.so 00:03:16.021 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:16.021 CC lib/nvme/nvme_ctrlr.o 00:03:16.021 CC lib/nvme/nvme_fabric.o 00:03:16.021 CC lib/nvme/nvme_pcie_common.o 00:03:16.021 CC lib/nvme/nvme_ns_cmd.o 00:03:16.021 CC lib/nvme/nvme_qpair.o 00:03:16.021 CC lib/nvme/nvme_pcie.o 00:03:16.021 CC lib/nvme/nvme.o 00:03:16.021 CC lib/nvme/nvme_ns.o 00:03:16.971 CC lib/nvme/nvme_quirks.o 00:03:16.971 CC lib/nvme/nvme_transport.o 00:03:16.971 CC lib/nvme/nvme_discovery.o 00:03:16.972 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:16.972 LIB libspdk_thread.a 00:03:16.972 SO libspdk_thread.so.11.0 00:03:16.972 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:17.230 CC lib/nvme/nvme_tcp.o 00:03:17.230 SYMLINK libspdk_thread.so 00:03:17.230 CC lib/nvme/nvme_opal.o 00:03:17.230 CC lib/nvme/nvme_io_msg.o 00:03:17.230 CC lib/nvme/nvme_poll_group.o 00:03:17.489 CC lib/nvme/nvme_zns.o 00:03:17.489 CC lib/nvme/nvme_stubs.o 00:03:17.748 CC lib/nvme/nvme_auth.o 00:03:17.748 CC lib/accel/accel.o 00:03:17.748 CC lib/nvme/nvme_cuse.o 00:03:18.006 CC lib/accel/accel_rpc.o 00:03:18.006 CC lib/blob/blobstore.o 00:03:18.006 CC lib/blob/request.o 00:03:18.264 CC lib/blob/zeroes.o 00:03:18.264 CC lib/accel/accel_sw.o 00:03:18.522 CC lib/blob/blob_bs_dev.o 00:03:18.522 CC lib/init/json_config.o 00:03:18.522 CC lib/init/subsystem.o 00:03:18.780 CC lib/init/subsystem_rpc.o 00:03:18.780 CC lib/nvme/nvme_vfio_user.o 00:03:18.780 CC lib/init/rpc.o 00:03:18.780 CC lib/nvme/nvme_rdma.o 00:03:19.038 LIB libspdk_init.a 00:03:19.038 CC lib/virtio/virtio.o 00:03:19.038 CC lib/virtio/virtio_vhost_user.o 00:03:19.038 SO libspdk_init.so.6.0 00:03:19.038 CC lib/vfu_tgt/tgt_endpoint.o 00:03:19.038 SYMLINK libspdk_init.so 00:03:19.038 CC lib/vfu_tgt/tgt_rpc.o 00:03:19.038 CC lib/virtio/virtio_vfio_user.o 00:03:19.038 CC lib/fsdev/fsdev.o 00:03:19.296 LIB libspdk_accel.a 00:03:19.296 CC lib/fsdev/fsdev_io.o 00:03:19.296 SO libspdk_accel.so.16.0 00:03:19.296 CC lib/fsdev/fsdev_rpc.o 00:03:19.296 CC lib/virtio/virtio_pci.o 00:03:19.554 LIB libspdk_vfu_tgt.a 00:03:19.554 SYMLINK libspdk_accel.so 00:03:19.554 SO libspdk_vfu_tgt.so.3.0 00:03:19.554 SYMLINK libspdk_vfu_tgt.so 00:03:19.554 CC lib/event/app.o 00:03:19.554 CC lib/event/reactor.o 00:03:19.554 CC lib/event/log_rpc.o 00:03:19.554 CC lib/event/app_rpc.o 00:03:19.554 CC lib/bdev/bdev.o 00:03:19.813 LIB libspdk_virtio.a 00:03:19.813 CC lib/bdev/bdev_rpc.o 00:03:19.813 CC lib/event/scheduler_static.o 00:03:19.813 SO 
libspdk_virtio.so.7.0 00:03:19.813 SYMLINK libspdk_virtio.so 00:03:19.813 CC lib/bdev/bdev_zone.o 00:03:19.813 CC lib/bdev/part.o 00:03:20.071 LIB libspdk_fsdev.a 00:03:20.071 CC lib/bdev/scsi_nvme.o 00:03:20.071 SO libspdk_fsdev.so.2.0 00:03:20.071 SYMLINK libspdk_fsdev.so 00:03:20.329 LIB libspdk_event.a 00:03:20.329 SO libspdk_event.so.14.0 00:03:20.329 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:20.329 SYMLINK libspdk_event.so 00:03:20.897 LIB libspdk_nvme.a 00:03:21.155 SO libspdk_nvme.so.15.0 00:03:21.155 LIB libspdk_fuse_dispatcher.a 00:03:21.155 SO libspdk_fuse_dispatcher.so.1.0 00:03:21.415 SYMLINK libspdk_fuse_dispatcher.so 00:03:21.415 SYMLINK libspdk_nvme.so 00:03:22.793 LIB libspdk_blob.a 00:03:22.793 SO libspdk_blob.so.12.0 00:03:22.794 SYMLINK libspdk_blob.so 00:03:23.052 CC lib/blobfs/blobfs.o 00:03:23.052 CC lib/blobfs/tree.o 00:03:23.052 CC lib/lvol/lvol.o 00:03:23.620 LIB libspdk_bdev.a 00:03:23.620 SO libspdk_bdev.so.17.0 00:03:23.620 SYMLINK libspdk_bdev.so 00:03:23.878 CC lib/scsi/dev.o 00:03:23.878 CC lib/scsi/lun.o 00:03:23.878 CC lib/scsi/port.o 00:03:23.878 CC lib/nvmf/ctrlr_discovery.o 00:03:23.878 CC lib/nvmf/ctrlr.o 00:03:23.878 CC lib/ftl/ftl_core.o 00:03:23.878 CC lib/ublk/ublk.o 00:03:23.878 CC lib/nbd/nbd.o 00:03:24.137 LIB libspdk_blobfs.a 00:03:24.137 SO libspdk_blobfs.so.11.0 00:03:24.137 CC lib/ublk/ublk_rpc.o 00:03:24.137 SYMLINK libspdk_blobfs.so 00:03:24.137 CC lib/nvmf/ctrlr_bdev.o 00:03:24.137 CC lib/nvmf/subsystem.o 00:03:24.395 CC lib/scsi/scsi.o 00:03:24.395 LIB libspdk_lvol.a 00:03:24.395 CC lib/scsi/scsi_bdev.o 00:03:24.395 SO libspdk_lvol.so.11.0 00:03:24.395 CC lib/ftl/ftl_init.o 00:03:24.395 SYMLINK libspdk_lvol.so 00:03:24.395 CC lib/ftl/ftl_layout.o 00:03:24.395 CC lib/ftl/ftl_debug.o 00:03:24.395 CC lib/nbd/nbd_rpc.o 00:03:24.653 CC lib/nvmf/nvmf.o 00:03:24.653 CC lib/ftl/ftl_io.o 00:03:24.653 LIB libspdk_nbd.a 00:03:24.653 SO libspdk_nbd.so.7.0 00:03:24.911 CC lib/nvmf/nvmf_rpc.o 00:03:24.911 LIB libspdk_ublk.a 00:03:24.911 SYMLINK libspdk_nbd.so 00:03:24.911 CC lib/nvmf/transport.o 00:03:24.911 SO libspdk_ublk.so.3.0 00:03:24.911 CC lib/ftl/ftl_sb.o 00:03:24.911 SYMLINK libspdk_ublk.so 00:03:24.911 CC lib/nvmf/tcp.o 00:03:24.911 CC lib/nvmf/stubs.o 00:03:24.911 CC lib/scsi/scsi_pr.o 00:03:25.168 CC lib/ftl/ftl_l2p.o 00:03:25.168 CC lib/nvmf/mdns_server.o 00:03:25.426 CC lib/ftl/ftl_l2p_flat.o 00:03:25.426 CC lib/scsi/scsi_rpc.o 00:03:25.426 CC lib/nvmf/vfio_user.o 00:03:25.684 CC lib/ftl/ftl_nv_cache.o 00:03:25.684 CC lib/scsi/task.o 00:03:25.684 CC lib/nvmf/rdma.o 00:03:25.684 CC lib/nvmf/auth.o 00:03:25.684 CC lib/ftl/ftl_band.o 00:03:25.942 LIB libspdk_scsi.a 00:03:25.942 CC lib/ftl/ftl_band_ops.o 00:03:25.942 CC lib/ftl/ftl_writer.o 00:03:25.942 SO libspdk_scsi.so.9.0 00:03:26.200 SYMLINK libspdk_scsi.so 00:03:26.200 CC lib/ftl/ftl_rq.o 00:03:26.200 CC lib/ftl/ftl_reloc.o 00:03:26.200 CC lib/ftl/ftl_l2p_cache.o 00:03:26.458 CC lib/ftl/ftl_p2l.o 00:03:26.458 CC lib/ftl/ftl_p2l_log.o 00:03:26.458 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.716 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.716 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:27.047 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:27.047 CC lib/iscsi/conn.o 00:03:27.047 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:27.047 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:27.047 CC lib/vhost/vhost.o 00:03:27.047 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:27.047 CC lib/iscsi/init_grp.o 00:03:27.047 CC lib/iscsi/iscsi.o 00:03:27.047 CC lib/iscsi/param.o 00:03:27.306 CC lib/vhost/vhost_rpc.o 00:03:27.306 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:03:27.306 CC lib/iscsi/portal_grp.o 00:03:27.306 CC lib/iscsi/tgt_node.o 00:03:27.564 CC lib/iscsi/iscsi_subsystem.o 00:03:27.564 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:27.564 CC lib/iscsi/iscsi_rpc.o 00:03:27.564 CC lib/iscsi/task.o 00:03:27.822 CC lib/vhost/vhost_scsi.o 00:03:27.822 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:27.822 CC lib/vhost/vhost_blk.o 00:03:27.822 CC lib/vhost/rte_vhost_user.o 00:03:28.080 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.080 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.080 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.080 CC lib/ftl/utils/ftl_conf.o 00:03:28.080 CC lib/ftl/utils/ftl_md.o 00:03:28.338 CC lib/ftl/utils/ftl_mempool.o 00:03:28.338 CC lib/ftl/utils/ftl_bitmap.o 00:03:28.338 CC lib/ftl/utils/ftl_property.o 00:03:28.595 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.595 LIB libspdk_nvmf.a 00:03:28.595 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.595 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.595 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.595 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.853 SO libspdk_nvmf.so.20.0 00:03:28.853 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.853 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:28.853 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.853 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.853 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.853 LIB libspdk_iscsi.a 00:03:28.853 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:29.112 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:29.112 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:29.112 CC lib/ftl/base/ftl_base_dev.o 00:03:29.112 SYMLINK libspdk_nvmf.so 00:03:29.112 CC lib/ftl/base/ftl_base_bdev.o 00:03:29.112 SO libspdk_iscsi.so.8.0 00:03:29.112 CC lib/ftl/ftl_trace.o 00:03:29.112 LIB libspdk_vhost.a 00:03:29.112 SO libspdk_vhost.so.8.0 00:03:29.370 SYMLINK libspdk_iscsi.so 00:03:29.370 SYMLINK libspdk_vhost.so 00:03:29.370 LIB libspdk_ftl.a 00:03:29.629 SO libspdk_ftl.so.9.0 00:03:29.887 SYMLINK libspdk_ftl.so 00:03:30.146 CC module/env_dpdk/env_dpdk_rpc.o 00:03:30.146 CC module/vfu_device/vfu_virtio.o 00:03:30.404 CC module/sock/uring/uring.o 00:03:30.404 CC module/blob/bdev/blob_bdev.o 00:03:30.404 CC module/sock/posix/posix.o 00:03:30.404 CC module/accel/ioat/accel_ioat.o 00:03:30.404 CC module/fsdev/aio/fsdev_aio.o 00:03:30.404 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:30.404 CC module/accel/error/accel_error.o 00:03:30.404 CC module/keyring/file/keyring.o 00:03:30.404 LIB libspdk_env_dpdk_rpc.a 00:03:30.404 SO libspdk_env_dpdk_rpc.so.6.0 00:03:30.404 SYMLINK libspdk_env_dpdk_rpc.so 00:03:30.404 CC module/keyring/file/keyring_rpc.o 00:03:30.662 CC module/vfu_device/vfu_virtio_blk.o 00:03:30.662 CC module/accel/ioat/accel_ioat_rpc.o 00:03:30.662 CC module/accel/error/accel_error_rpc.o 00:03:30.662 LIB libspdk_scheduler_dynamic.a 00:03:30.662 SO libspdk_scheduler_dynamic.so.4.0 00:03:30.662 LIB libspdk_keyring_file.a 00:03:30.662 SO libspdk_keyring_file.so.2.0 00:03:30.662 LIB libspdk_blob_bdev.a 00:03:30.662 SYMLINK libspdk_scheduler_dynamic.so 00:03:30.662 LIB libspdk_accel_ioat.a 00:03:30.662 SO libspdk_blob_bdev.so.12.0 00:03:30.662 SYMLINK libspdk_keyring_file.so 00:03:30.662 LIB libspdk_accel_error.a 00:03:30.922 SO libspdk_accel_ioat.so.6.0 00:03:30.922 SO libspdk_accel_error.so.2.0 00:03:30.922 SYMLINK libspdk_blob_bdev.so 00:03:30.922 SYMLINK libspdk_accel_ioat.so 00:03:30.922 SYMLINK libspdk_accel_error.so 00:03:30.922 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:30.922 CC module/keyring/linux/keyring.o 00:03:31.180 CC 
module/accel/dsa/accel_dsa.o 00:03:31.180 CC module/vfu_device/vfu_virtio_scsi.o 00:03:31.180 CC module/accel/iaa/accel_iaa.o 00:03:31.180 LIB libspdk_scheduler_dpdk_governor.a 00:03:31.180 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:31.180 CC module/keyring/linux/keyring_rpc.o 00:03:31.180 CC module/bdev/delay/vbdev_delay.o 00:03:31.180 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.180 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:31.180 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:31.180 LIB libspdk_sock_posix.a 00:03:31.438 LIB libspdk_keyring_linux.a 00:03:31.438 LIB libspdk_sock_uring.a 00:03:31.438 SO libspdk_sock_posix.so.6.0 00:03:31.438 SO libspdk_keyring_linux.so.1.0 00:03:31.438 CC module/accel/iaa/accel_iaa_rpc.o 00:03:31.438 SO libspdk_sock_uring.so.5.0 00:03:31.438 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:31.438 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.438 SYMLINK libspdk_keyring_linux.so 00:03:31.438 CC module/fsdev/aio/linux_aio_mgr.o 00:03:31.438 SYMLINK libspdk_sock_posix.so 00:03:31.438 SYMLINK libspdk_sock_uring.so 00:03:31.438 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.438 CC module/vfu_device/vfu_virtio_rpc.o 00:03:31.438 CC module/vfu_device/vfu_virtio_fs.o 00:03:31.438 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.438 LIB libspdk_accel_iaa.a 00:03:31.438 SO libspdk_accel_iaa.so.3.0 00:03:31.696 LIB libspdk_blobfs_bdev.a 00:03:31.696 LIB libspdk_accel_dsa.a 00:03:31.696 LIB libspdk_scheduler_gscheduler.a 00:03:31.696 SO libspdk_blobfs_bdev.so.6.0 00:03:31.696 SO libspdk_scheduler_gscheduler.so.4.0 00:03:31.696 SO libspdk_accel_dsa.so.5.0 00:03:31.696 SYMLINK libspdk_accel_iaa.so 00:03:31.696 LIB libspdk_bdev_delay.a 00:03:31.696 LIB libspdk_fsdev_aio.a 00:03:31.696 SYMLINK libspdk_scheduler_gscheduler.so 00:03:31.696 SYMLINK libspdk_accel_dsa.so 00:03:31.696 SO libspdk_bdev_delay.so.6.0 00:03:31.696 SYMLINK libspdk_blobfs_bdev.so 00:03:31.696 SO libspdk_fsdev_aio.so.1.0 00:03:31.696 LIB libspdk_vfu_device.a 00:03:31.696 CC module/bdev/error/vbdev_error.o 00:03:31.696 SO libspdk_vfu_device.so.3.0 00:03:31.696 SYMLINK libspdk_bdev_delay.so 00:03:31.696 SYMLINK libspdk_fsdev_aio.so 00:03:31.696 CC module/bdev/error/vbdev_error_rpc.o 00:03:31.696 CC module/bdev/gpt/gpt.o 00:03:31.954 SYMLINK libspdk_vfu_device.so 00:03:31.954 CC module/bdev/lvol/vbdev_lvol.o 00:03:31.954 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.954 CC module/bdev/null/bdev_null.o 00:03:31.954 CC module/bdev/malloc/bdev_malloc.o 00:03:31.954 CC module/bdev/nvme/bdev_nvme.o 00:03:31.954 CC module/bdev/passthru/vbdev_passthru.o 00:03:31.954 CC module/bdev/raid/bdev_raid.o 00:03:31.954 CC module/bdev/raid/bdev_raid_rpc.o 00:03:31.954 CC module/bdev/raid/bdev_raid_sb.o 00:03:31.954 LIB libspdk_bdev_error.a 00:03:32.212 SO libspdk_bdev_error.so.6.0 00:03:32.212 SYMLINK libspdk_bdev_error.so 00:03:32.212 CC module/bdev/null/bdev_null_rpc.o 00:03:32.212 LIB libspdk_bdev_gpt.a 00:03:32.212 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:32.212 SO libspdk_bdev_gpt.so.6.0 00:03:32.212 CC module/bdev/raid/raid0.o 00:03:32.212 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.212 SYMLINK libspdk_bdev_gpt.so 00:03:32.470 LIB libspdk_bdev_null.a 00:03:32.470 LIB libspdk_bdev_malloc.a 00:03:32.470 SO libspdk_bdev_null.so.6.0 00:03:32.470 SO libspdk_bdev_malloc.so.6.0 00:03:32.470 LIB libspdk_bdev_passthru.a 00:03:32.470 SYMLINK libspdk_bdev_null.so 00:03:32.470 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:32.470 SO libspdk_bdev_passthru.so.6.0 00:03:32.470 SYMLINK libspdk_bdev_malloc.so 
00:03:32.470 CC module/bdev/split/vbdev_split.o 00:03:32.470 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.470 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.728 SYMLINK libspdk_bdev_passthru.so 00:03:32.728 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.728 CC module/bdev/uring/bdev_uring.o 00:03:32.728 CC module/bdev/aio/bdev_aio.o 00:03:32.728 CC module/bdev/ftl/bdev_ftl.o 00:03:32.728 CC module/bdev/nvme/nvme_rpc.o 00:03:32.728 LIB libspdk_bdev_split.a 00:03:32.728 SO libspdk_bdev_split.so.6.0 00:03:32.986 SYMLINK libspdk_bdev_split.so 00:03:32.986 CC module/bdev/raid/raid1.o 00:03:32.986 LIB libspdk_bdev_lvol.a 00:03:32.986 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.986 SO libspdk_bdev_lvol.so.6.0 00:03:32.986 CC module/bdev/raid/concat.o 00:03:32.986 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.986 CC module/bdev/uring/bdev_uring_rpc.o 00:03:32.986 SYMLINK libspdk_bdev_lvol.so 00:03:33.244 CC module/bdev/aio/bdev_aio_rpc.o 00:03:33.244 LIB libspdk_bdev_zone_block.a 00:03:33.244 SO libspdk_bdev_zone_block.so.6.0 00:03:33.244 CC module/bdev/nvme/bdev_mdns_client.o 00:03:33.244 LIB libspdk_bdev_uring.a 00:03:33.244 SYMLINK libspdk_bdev_zone_block.so 00:03:33.244 SO libspdk_bdev_uring.so.6.0 00:03:33.244 CC module/bdev/iscsi/bdev_iscsi.o 00:03:33.244 LIB libspdk_bdev_ftl.a 00:03:33.244 CC module/bdev/nvme/vbdev_opal.o 00:03:33.244 LIB libspdk_bdev_raid.a 00:03:33.244 LIB libspdk_bdev_aio.a 00:03:33.244 SO libspdk_bdev_ftl.so.6.0 00:03:33.244 SYMLINK libspdk_bdev_uring.so 00:03:33.244 SO libspdk_bdev_aio.so.6.0 00:03:33.244 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:33.502 SO libspdk_bdev_raid.so.6.0 00:03:33.502 SYMLINK libspdk_bdev_ftl.so 00:03:33.502 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:33.502 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:33.502 SYMLINK libspdk_bdev_aio.so 00:03:33.502 SYMLINK libspdk_bdev_raid.so 00:03:33.502 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:33.502 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:33.502 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:33.760 LIB libspdk_bdev_iscsi.a 00:03:33.760 SO libspdk_bdev_iscsi.so.6.0 00:03:33.760 SYMLINK libspdk_bdev_iscsi.so 00:03:34.019 LIB libspdk_bdev_virtio.a 00:03:34.277 SO libspdk_bdev_virtio.so.6.0 00:03:34.277 SYMLINK libspdk_bdev_virtio.so 00:03:35.213 LIB libspdk_bdev_nvme.a 00:03:35.213 SO libspdk_bdev_nvme.so.7.1 00:03:35.471 SYMLINK libspdk_bdev_nvme.so 00:03:36.039 CC module/event/subsystems/iobuf/iobuf.o 00:03:36.039 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:36.039 CC module/event/subsystems/sock/sock.o 00:03:36.039 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:36.039 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:36.039 CC module/event/subsystems/fsdev/fsdev.o 00:03:36.039 CC module/event/subsystems/vmd/vmd.o 00:03:36.039 CC module/event/subsystems/keyring/keyring.o 00:03:36.039 CC module/event/subsystems/scheduler/scheduler.o 00:03:36.039 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:36.039 LIB libspdk_event_scheduler.a 00:03:36.039 LIB libspdk_event_sock.a 00:03:36.039 LIB libspdk_event_keyring.a 00:03:36.039 LIB libspdk_event_vhost_blk.a 00:03:36.039 LIB libspdk_event_fsdev.a 00:03:36.039 LIB libspdk_event_vmd.a 00:03:36.039 LIB libspdk_event_vfu_tgt.a 00:03:36.039 SO libspdk_event_scheduler.so.4.0 00:03:36.039 SO libspdk_event_vhost_blk.so.3.0 00:03:36.039 SO libspdk_event_sock.so.5.0 00:03:36.039 SO libspdk_event_keyring.so.1.0 00:03:36.039 SO libspdk_event_fsdev.so.1.0 00:03:36.039 LIB libspdk_event_iobuf.a 00:03:36.039 SO 
libspdk_event_vfu_tgt.so.3.0 00:03:36.039 SO libspdk_event_vmd.so.6.0 00:03:36.039 SO libspdk_event_iobuf.so.3.0 00:03:36.039 SYMLINK libspdk_event_scheduler.so 00:03:36.039 SYMLINK libspdk_event_vhost_blk.so 00:03:36.039 SYMLINK libspdk_event_keyring.so 00:03:36.039 SYMLINK libspdk_event_fsdev.so 00:03:36.039 SYMLINK libspdk_event_sock.so 00:03:36.039 SYMLINK libspdk_event_vfu_tgt.so 00:03:36.039 SYMLINK libspdk_event_vmd.so 00:03:36.297 SYMLINK libspdk_event_iobuf.so 00:03:36.556 CC module/event/subsystems/accel/accel.o 00:03:36.556 LIB libspdk_event_accel.a 00:03:36.556 SO libspdk_event_accel.so.6.0 00:03:36.815 SYMLINK libspdk_event_accel.so 00:03:37.073 CC module/event/subsystems/bdev/bdev.o 00:03:37.332 LIB libspdk_event_bdev.a 00:03:37.332 SO libspdk_event_bdev.so.6.0 00:03:37.332 SYMLINK libspdk_event_bdev.so 00:03:37.590 CC module/event/subsystems/ublk/ublk.o 00:03:37.590 CC module/event/subsystems/scsi/scsi.o 00:03:37.590 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.590 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.590 CC module/event/subsystems/nbd/nbd.o 00:03:37.849 LIB libspdk_event_ublk.a 00:03:37.849 LIB libspdk_event_nbd.a 00:03:37.849 SO libspdk_event_ublk.so.3.0 00:03:37.849 SO libspdk_event_nbd.so.6.0 00:03:37.849 LIB libspdk_event_scsi.a 00:03:37.849 SO libspdk_event_scsi.so.6.0 00:03:37.849 SYMLINK libspdk_event_ublk.so 00:03:37.849 SYMLINK libspdk_event_nbd.so 00:03:37.849 LIB libspdk_event_nvmf.a 00:03:37.849 SYMLINK libspdk_event_scsi.so 00:03:37.849 SO libspdk_event_nvmf.so.6.0 00:03:38.107 SYMLINK libspdk_event_nvmf.so 00:03:38.107 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:38.107 CC module/event/subsystems/iscsi/iscsi.o 00:03:38.367 LIB libspdk_event_vhost_scsi.a 00:03:38.367 SO libspdk_event_vhost_scsi.so.3.0 00:03:38.367 LIB libspdk_event_iscsi.a 00:03:38.367 SO libspdk_event_iscsi.so.6.0 00:03:38.367 SYMLINK libspdk_event_vhost_scsi.so 00:03:38.367 SYMLINK libspdk_event_iscsi.so 00:03:38.626 SO libspdk.so.6.0 00:03:38.626 SYMLINK libspdk.so 00:03:38.884 CC app/spdk_nvme_perf/perf.o 00:03:38.884 CC app/trace_record/trace_record.o 00:03:38.884 CXX app/trace/trace.o 00:03:38.884 CC app/spdk_lspci/spdk_lspci.o 00:03:38.884 CC app/nvmf_tgt/nvmf_main.o 00:03:38.884 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.143 CC examples/util/zipf/zipf.o 00:03:39.143 CC examples/ioat/perf/perf.o 00:03:39.143 CC app/spdk_tgt/spdk_tgt.o 00:03:39.143 CC test/thread/poller_perf/poller_perf.o 00:03:39.143 LINK spdk_lspci 00:03:39.143 LINK nvmf_tgt 00:03:39.143 LINK poller_perf 00:03:39.402 LINK iscsi_tgt 00:03:39.402 LINK zipf 00:03:39.402 LINK spdk_trace_record 00:03:39.402 LINK spdk_tgt 00:03:39.402 LINK ioat_perf 00:03:39.402 CC app/spdk_nvme_identify/identify.o 00:03:39.402 LINK spdk_trace 00:03:39.661 CC app/spdk_top/spdk_top.o 00:03:39.661 CC app/spdk_nvme_discover/discovery_aer.o 00:03:39.661 CC examples/ioat/verify/verify.o 00:03:39.661 CC app/spdk_dd/spdk_dd.o 00:03:39.661 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.661 CC test/dma/test_dma/test_dma.o 00:03:39.919 CC app/fio/nvme/fio_plugin.o 00:03:39.919 LINK spdk_nvme_discover 00:03:39.919 LINK interrupt_tgt 00:03:39.919 CC test/app/bdev_svc/bdev_svc.o 00:03:39.919 LINK verify 00:03:40.178 CC app/fio/bdev/fio_plugin.o 00:03:40.178 LINK bdev_svc 00:03:40.178 LINK spdk_nvme_perf 00:03:40.178 LINK spdk_dd 00:03:40.178 CC app/vhost/vhost.o 00:03:40.178 CC examples/thread/thread/thread_ex.o 00:03:40.437 LINK test_dma 00:03:40.437 LINK vhost 00:03:40.437 CC test/app/histogram_perf/histogram_perf.o 
00:03:40.437 LINK spdk_nvme 00:03:40.437 CC test/app/jsoncat/jsoncat.o 00:03:40.437 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.437 LINK spdk_nvme_identify 00:03:40.696 LINK thread 00:03:40.696 LINK histogram_perf 00:03:40.696 LINK jsoncat 00:03:40.696 TEST_HEADER include/spdk/accel.h 00:03:40.697 TEST_HEADER include/spdk/accel_module.h 00:03:40.697 TEST_HEADER include/spdk/assert.h 00:03:40.697 TEST_HEADER include/spdk/barrier.h 00:03:40.697 TEST_HEADER include/spdk/base64.h 00:03:40.697 TEST_HEADER include/spdk/bdev.h 00:03:40.697 TEST_HEADER include/spdk/bdev_module.h 00:03:40.697 TEST_HEADER include/spdk/bdev_zone.h 00:03:40.697 TEST_HEADER include/spdk/bit_array.h 00:03:40.697 TEST_HEADER include/spdk/bit_pool.h 00:03:40.697 LINK spdk_top 00:03:40.697 TEST_HEADER include/spdk/blob_bdev.h 00:03:40.697 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:40.697 TEST_HEADER include/spdk/blobfs.h 00:03:40.697 TEST_HEADER include/spdk/blob.h 00:03:40.697 TEST_HEADER include/spdk/conf.h 00:03:40.697 TEST_HEADER include/spdk/config.h 00:03:40.697 TEST_HEADER include/spdk/cpuset.h 00:03:40.697 TEST_HEADER include/spdk/crc16.h 00:03:40.697 TEST_HEADER include/spdk/crc32.h 00:03:40.697 TEST_HEADER include/spdk/crc64.h 00:03:40.697 LINK spdk_bdev 00:03:40.697 TEST_HEADER include/spdk/dif.h 00:03:40.697 TEST_HEADER include/spdk/dma.h 00:03:40.697 TEST_HEADER include/spdk/endian.h 00:03:40.697 TEST_HEADER include/spdk/env_dpdk.h 00:03:40.697 TEST_HEADER include/spdk/env.h 00:03:40.697 TEST_HEADER include/spdk/event.h 00:03:40.697 TEST_HEADER include/spdk/fd_group.h 00:03:40.697 TEST_HEADER include/spdk/fd.h 00:03:40.697 TEST_HEADER include/spdk/file.h 00:03:40.697 TEST_HEADER include/spdk/fsdev.h 00:03:40.697 TEST_HEADER include/spdk/fsdev_module.h 00:03:40.697 TEST_HEADER include/spdk/ftl.h 00:03:40.697 TEST_HEADER include/spdk/gpt_spec.h 00:03:40.697 TEST_HEADER include/spdk/hexlify.h 00:03:40.697 TEST_HEADER include/spdk/histogram_data.h 00:03:40.697 TEST_HEADER include/spdk/idxd.h 00:03:40.697 TEST_HEADER include/spdk/idxd_spec.h 00:03:40.697 TEST_HEADER include/spdk/init.h 00:03:40.697 TEST_HEADER include/spdk/ioat.h 00:03:40.697 TEST_HEADER include/spdk/ioat_spec.h 00:03:40.697 TEST_HEADER include/spdk/iscsi_spec.h 00:03:40.697 TEST_HEADER include/spdk/json.h 00:03:40.697 TEST_HEADER include/spdk/jsonrpc.h 00:03:40.697 TEST_HEADER include/spdk/keyring.h 00:03:40.697 TEST_HEADER include/spdk/keyring_module.h 00:03:40.697 TEST_HEADER include/spdk/likely.h 00:03:40.697 TEST_HEADER include/spdk/log.h 00:03:40.697 TEST_HEADER include/spdk/lvol.h 00:03:40.697 TEST_HEADER include/spdk/md5.h 00:03:40.697 TEST_HEADER include/spdk/memory.h 00:03:40.697 TEST_HEADER include/spdk/mmio.h 00:03:40.697 TEST_HEADER include/spdk/nbd.h 00:03:40.697 TEST_HEADER include/spdk/net.h 00:03:40.697 TEST_HEADER include/spdk/notify.h 00:03:40.697 TEST_HEADER include/spdk/nvme.h 00:03:40.697 TEST_HEADER include/spdk/nvme_intel.h 00:03:40.697 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:40.697 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:40.697 TEST_HEADER include/spdk/nvme_spec.h 00:03:40.697 TEST_HEADER include/spdk/nvme_zns.h 00:03:40.697 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:40.697 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:40.697 TEST_HEADER include/spdk/nvmf.h 00:03:40.956 TEST_HEADER include/spdk/nvmf_spec.h 00:03:40.956 TEST_HEADER include/spdk/nvmf_transport.h 00:03:40.956 TEST_HEADER include/spdk/opal.h 00:03:40.956 TEST_HEADER include/spdk/opal_spec.h 00:03:40.956 TEST_HEADER include/spdk/pci_ids.h 
00:03:40.956 TEST_HEADER include/spdk/pipe.h 00:03:40.956 TEST_HEADER include/spdk/queue.h 00:03:40.956 TEST_HEADER include/spdk/reduce.h 00:03:40.956 TEST_HEADER include/spdk/rpc.h 00:03:40.956 TEST_HEADER include/spdk/scheduler.h 00:03:40.956 TEST_HEADER include/spdk/scsi.h 00:03:40.956 TEST_HEADER include/spdk/scsi_spec.h 00:03:40.956 TEST_HEADER include/spdk/sock.h 00:03:40.956 TEST_HEADER include/spdk/stdinc.h 00:03:40.956 TEST_HEADER include/spdk/string.h 00:03:40.956 TEST_HEADER include/spdk/thread.h 00:03:40.956 TEST_HEADER include/spdk/trace.h 00:03:40.956 TEST_HEADER include/spdk/trace_parser.h 00:03:40.956 TEST_HEADER include/spdk/tree.h 00:03:40.956 CC examples/sock/hello_world/hello_sock.o 00:03:40.956 TEST_HEADER include/spdk/ublk.h 00:03:40.956 TEST_HEADER include/spdk/util.h 00:03:40.956 TEST_HEADER include/spdk/uuid.h 00:03:40.956 TEST_HEADER include/spdk/version.h 00:03:40.956 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:40.956 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:40.956 TEST_HEADER include/spdk/vhost.h 00:03:40.956 TEST_HEADER include/spdk/vmd.h 00:03:40.956 TEST_HEADER include/spdk/xor.h 00:03:40.956 TEST_HEADER include/spdk/zipf.h 00:03:40.956 CXX test/cpp_headers/accel.o 00:03:40.956 CC test/event/event_perf/event_perf.o 00:03:40.956 CC test/app/stub/stub.o 00:03:40.956 CC test/env/mem_callbacks/mem_callbacks.o 00:03:40.956 CC test/env/vtophys/vtophys.o 00:03:40.956 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.956 CC test/env/memory/memory_ut.o 00:03:40.956 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.956 LINK nvme_fuzz 00:03:40.956 LINK event_perf 00:03:41.235 CXX test/cpp_headers/accel_module.o 00:03:41.235 LINK vtophys 00:03:41.235 LINK stub 00:03:41.235 LINK env_dpdk_post_init 00:03:41.235 LINK lsvmd 00:03:41.235 LINK hello_sock 00:03:41.235 CXX test/cpp_headers/assert.o 00:03:41.235 CXX test/cpp_headers/barrier.o 00:03:41.501 CC test/event/reactor/reactor.o 00:03:41.501 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:41.501 CC test/env/pci/pci_ut.o 00:03:41.501 CC examples/vmd/led/led.o 00:03:41.501 CXX test/cpp_headers/base64.o 00:03:41.501 CC test/rpc_client/rpc_client_test.o 00:03:41.501 LINK reactor 00:03:41.501 CC test/nvme/aer/aer.o 00:03:41.501 LINK mem_callbacks 00:03:41.501 CC test/nvme/reset/reset.o 00:03:41.759 LINK led 00:03:41.759 CXX test/cpp_headers/bdev.o 00:03:41.759 LINK rpc_client_test 00:03:41.759 CC test/event/reactor_perf/reactor_perf.o 00:03:41.759 CC test/nvme/sgl/sgl.o 00:03:42.018 CXX test/cpp_headers/bdev_module.o 00:03:42.018 LINK aer 00:03:42.018 CXX test/cpp_headers/bdev_zone.o 00:03:42.018 LINK pci_ut 00:03:42.018 LINK reset 00:03:42.018 LINK reactor_perf 00:03:42.018 CC examples/idxd/perf/perf.o 00:03:42.018 CXX test/cpp_headers/bit_array.o 00:03:42.276 LINK sgl 00:03:42.276 CXX test/cpp_headers/bit_pool.o 00:03:42.276 CC test/event/app_repeat/app_repeat.o 00:03:42.276 CXX test/cpp_headers/blob_bdev.o 00:03:42.276 CC test/blobfs/mkfs/mkfs.o 00:03:42.276 CC test/accel/dif/dif.o 00:03:42.276 LINK memory_ut 00:03:42.535 CC test/lvol/esnap/esnap.o 00:03:42.535 LINK idxd_perf 00:03:42.535 LINK app_repeat 00:03:42.535 CC test/nvme/e2edp/nvme_dp.o 00:03:42.535 CC test/nvme/overhead/overhead.o 00:03:42.535 CXX test/cpp_headers/blobfs_bdev.o 00:03:42.535 LINK mkfs 00:03:42.794 CC test/nvme/err_injection/err_injection.o 00:03:42.794 CXX test/cpp_headers/blobfs.o 00:03:42.794 CC test/event/scheduler/scheduler.o 00:03:42.794 LINK nvme_dp 00:03:42.794 CXX test/cpp_headers/blob.o 00:03:42.794 CC 
examples/fsdev/hello_world/hello_fsdev.o 00:03:42.794 LINK overhead 00:03:42.794 LINK err_injection 00:03:43.052 CXX test/cpp_headers/conf.o 00:03:43.052 LINK scheduler 00:03:43.052 CC test/nvme/startup/startup.o 00:03:43.052 CXX test/cpp_headers/config.o 00:03:43.052 CXX test/cpp_headers/cpuset.o 00:03:43.052 CC test/nvme/reserve/reserve.o 00:03:43.052 LINK hello_fsdev 00:03:43.311 CC examples/accel/perf/accel_perf.o 00:03:43.311 LINK startup 00:03:43.311 CC examples/blob/hello_world/hello_blob.o 00:03:43.311 LINK dif 00:03:43.311 CXX test/cpp_headers/crc16.o 00:03:43.311 CXX test/cpp_headers/crc32.o 00:03:43.311 CC examples/blob/cli/blobcli.o 00:03:43.311 LINK reserve 00:03:43.570 CC test/nvme/simple_copy/simple_copy.o 00:03:43.570 LINK hello_blob 00:03:43.570 CXX test/cpp_headers/crc64.o 00:03:43.570 LINK iscsi_fuzz 00:03:43.570 CC test/nvme/connect_stress/connect_stress.o 00:03:43.570 CC test/nvme/boot_partition/boot_partition.o 00:03:43.828 CXX test/cpp_headers/dif.o 00:03:43.828 LINK simple_copy 00:03:43.828 CC examples/nvme/hello_world/hello_world.o 00:03:43.828 LINK accel_perf 00:03:43.828 LINK connect_stress 00:03:43.828 LINK boot_partition 00:03:43.828 CC examples/nvme/reconnect/reconnect.o 00:03:43.828 CXX test/cpp_headers/dma.o 00:03:43.828 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:44.087 LINK blobcli 00:03:44.087 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:44.087 CC test/nvme/compliance/nvme_compliance.o 00:03:44.087 LINK hello_world 00:03:44.087 CC test/nvme/fused_ordering/fused_ordering.o 00:03:44.087 CXX test/cpp_headers/endian.o 00:03:44.345 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:44.345 LINK reconnect 00:03:44.345 CC examples/bdev/hello_world/hello_bdev.o 00:03:44.345 CXX test/cpp_headers/env_dpdk.o 00:03:44.345 LINK fused_ordering 00:03:44.346 CC examples/bdev/bdevperf/bdevperf.o 00:03:44.604 LINK doorbell_aers 00:03:44.604 LINK nvme_compliance 00:03:44.604 CXX test/cpp_headers/env.o 00:03:44.604 LINK vhost_fuzz 00:03:44.604 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:44.604 CC test/bdev/bdevio/bdevio.o 00:03:44.604 LINK hello_bdev 00:03:44.604 CC test/nvme/fdp/fdp.o 00:03:44.604 CXX test/cpp_headers/event.o 00:03:44.604 CXX test/cpp_headers/fd_group.o 00:03:44.862 CC examples/nvme/arbitration/arbitration.o 00:03:44.862 CC examples/nvme/hotplug/hotplug.o 00:03:44.862 CC test/nvme/cuse/cuse.o 00:03:44.862 CXX test/cpp_headers/fd.o 00:03:45.120 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.120 LINK bdevio 00:03:45.120 LINK hotplug 00:03:45.120 LINK fdp 00:03:45.120 CXX test/cpp_headers/file.o 00:03:45.120 LINK arbitration 00:03:45.120 LINK cmb_copy 00:03:45.120 LINK nvme_manage 00:03:45.378 CXX test/cpp_headers/fsdev.o 00:03:45.378 CXX test/cpp_headers/fsdev_module.o 00:03:45.378 CC examples/nvme/abort/abort.o 00:03:45.378 CXX test/cpp_headers/ftl.o 00:03:45.378 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:45.378 CXX test/cpp_headers/gpt_spec.o 00:03:45.378 CXX test/cpp_headers/hexlify.o 00:03:45.378 CXX test/cpp_headers/histogram_data.o 00:03:45.378 CXX test/cpp_headers/idxd.o 00:03:45.378 LINK bdevperf 00:03:45.637 LINK pmr_persistence 00:03:45.637 CXX test/cpp_headers/idxd_spec.o 00:03:45.637 CXX test/cpp_headers/init.o 00:03:45.637 CXX test/cpp_headers/ioat.o 00:03:45.637 CXX test/cpp_headers/ioat_spec.o 00:03:45.637 CXX test/cpp_headers/iscsi_spec.o 00:03:45.637 CXX test/cpp_headers/json.o 00:03:45.637 CXX test/cpp_headers/jsonrpc.o 00:03:45.637 CXX test/cpp_headers/keyring.o 00:03:45.895 CXX test/cpp_headers/keyring_module.o 
00:03:45.895 CXX test/cpp_headers/likely.o 00:03:45.895 LINK abort 00:03:45.895 CXX test/cpp_headers/log.o 00:03:45.895 CXX test/cpp_headers/lvol.o 00:03:45.895 CXX test/cpp_headers/md5.o 00:03:45.895 CXX test/cpp_headers/memory.o 00:03:45.896 CXX test/cpp_headers/mmio.o 00:03:45.896 CXX test/cpp_headers/nbd.o 00:03:45.896 CXX test/cpp_headers/net.o 00:03:45.896 CXX test/cpp_headers/notify.o 00:03:46.154 CXX test/cpp_headers/nvme.o 00:03:46.154 CXX test/cpp_headers/nvme_intel.o 00:03:46.154 CXX test/cpp_headers/nvme_ocssd.o 00:03:46.154 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.154 CXX test/cpp_headers/nvme_spec.o 00:03:46.154 CXX test/cpp_headers/nvme_zns.o 00:03:46.154 CXX test/cpp_headers/nvmf_cmd.o 00:03:46.154 CC examples/nvmf/nvmf/nvmf.o 00:03:46.154 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:46.154 CXX test/cpp_headers/nvmf.o 00:03:46.413 CXX test/cpp_headers/nvmf_spec.o 00:03:46.413 CXX test/cpp_headers/nvmf_transport.o 00:03:46.413 CXX test/cpp_headers/opal.o 00:03:46.413 CXX test/cpp_headers/opal_spec.o 00:03:46.413 CXX test/cpp_headers/pci_ids.o 00:03:46.413 CXX test/cpp_headers/pipe.o 00:03:46.413 CXX test/cpp_headers/queue.o 00:03:46.413 CXX test/cpp_headers/reduce.o 00:03:46.413 CXX test/cpp_headers/rpc.o 00:03:46.413 LINK cuse 00:03:46.413 CXX test/cpp_headers/scheduler.o 00:03:46.413 CXX test/cpp_headers/scsi.o 00:03:46.671 LINK nvmf 00:03:46.671 CXX test/cpp_headers/scsi_spec.o 00:03:46.671 CXX test/cpp_headers/sock.o 00:03:46.671 CXX test/cpp_headers/stdinc.o 00:03:46.671 CXX test/cpp_headers/string.o 00:03:46.671 CXX test/cpp_headers/thread.o 00:03:46.671 CXX test/cpp_headers/trace.o 00:03:46.671 CXX test/cpp_headers/trace_parser.o 00:03:46.671 CXX test/cpp_headers/tree.o 00:03:46.671 CXX test/cpp_headers/ublk.o 00:03:46.671 CXX test/cpp_headers/util.o 00:03:46.671 CXX test/cpp_headers/uuid.o 00:03:46.671 CXX test/cpp_headers/version.o 00:03:46.930 CXX test/cpp_headers/vfio_user_pci.o 00:03:46.930 CXX test/cpp_headers/vfio_user_spec.o 00:03:46.930 CXX test/cpp_headers/vhost.o 00:03:46.930 CXX test/cpp_headers/vmd.o 00:03:46.930 CXX test/cpp_headers/xor.o 00:03:46.930 CXX test/cpp_headers/zipf.o 00:03:49.463 LINK esnap 00:03:49.722 00:03:49.722 real 1m39.293s 00:03:49.722 user 9m30.164s 00:03:49.722 sys 1m40.051s 00:03:49.722 05:21:29 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:49.722 ************************************ 00:03:49.722 END TEST make 00:03:49.722 ************************************ 00:03:49.722 05:21:29 make -- common/autotest_common.sh@10 -- $ set +x 00:03:49.722 05:21:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.722 05:21:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.722 05:21:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.722 05:21:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.722 05:21:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:49.722 05:21:29 -- pm/common@44 -- $ pid=5294 00:03:49.722 05:21:29 -- pm/common@50 -- $ kill -TERM 5294 00:03:49.722 05:21:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.722 05:21:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.722 05:21:29 -- pm/common@44 -- $ pid=5296 00:03:49.722 05:21:29 -- pm/common@50 -- $ kill -TERM 5296 00:03:49.722 05:21:29 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:49.722 05:21:29 -- 
spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:49.722 05:21:29 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:49.722 05:21:29 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:49.722 05:21:29 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:49.980 05:21:30 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:49.980 05:21:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.980 05:21:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.980 05:21:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.980 05:21:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.980 05:21:30 -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.980 05:21:30 -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.981 05:21:30 -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.981 05:21:30 -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.981 05:21:30 -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.981 05:21:30 -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.981 05:21:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.981 05:21:30 -- scripts/common.sh@344 -- # case "$op" in 00:03:49.981 05:21:30 -- scripts/common.sh@345 -- # : 1 00:03:49.981 05:21:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.981 05:21:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.981 05:21:30 -- scripts/common.sh@365 -- # decimal 1 00:03:49.981 05:21:30 -- scripts/common.sh@353 -- # local d=1 00:03:49.981 05:21:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.981 05:21:30 -- scripts/common.sh@355 -- # echo 1 00:03:49.981 05:21:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.981 05:21:30 -- scripts/common.sh@366 -- # decimal 2 00:03:49.981 05:21:30 -- scripts/common.sh@353 -- # local d=2 00:03:49.981 05:21:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.981 05:21:30 -- scripts/common.sh@355 -- # echo 2 00:03:49.981 05:21:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.981 05:21:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.981 05:21:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.981 05:21:30 -- scripts/common.sh@368 -- # return 0 00:03:49.981 05:21:30 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.981 05:21:30 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:49.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.981 --rc genhtml_branch_coverage=1 00:03:49.981 --rc genhtml_function_coverage=1 00:03:49.981 --rc genhtml_legend=1 00:03:49.981 --rc geninfo_all_blocks=1 00:03:49.981 --rc geninfo_unexecuted_blocks=1 00:03:49.981 00:03:49.981 ' 00:03:49.981 05:21:30 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:49.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.981 --rc genhtml_branch_coverage=1 00:03:49.981 --rc genhtml_function_coverage=1 00:03:49.981 --rc genhtml_legend=1 00:03:49.981 --rc geninfo_all_blocks=1 00:03:49.981 --rc geninfo_unexecuted_blocks=1 00:03:49.981 00:03:49.981 ' 00:03:49.981 05:21:30 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:49.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.981 --rc genhtml_branch_coverage=1 00:03:49.981 --rc genhtml_function_coverage=1 00:03:49.981 --rc genhtml_legend=1 00:03:49.981 --rc geninfo_all_blocks=1 00:03:49.981 --rc geninfo_unexecuted_blocks=1 00:03:49.981 
00:03:49.981 ' 00:03:49.981 05:21:30 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:49.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.981 --rc genhtml_branch_coverage=1 00:03:49.981 --rc genhtml_function_coverage=1 00:03:49.981 --rc genhtml_legend=1 00:03:49.981 --rc geninfo_all_blocks=1 00:03:49.981 --rc geninfo_unexecuted_blocks=1 00:03:49.981 00:03:49.981 ' 00:03:49.981 05:21:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:49.981 05:21:30 -- nvmf/common.sh@7 -- # uname -s 00:03:49.981 05:21:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.981 05:21:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.981 05:21:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.981 05:21:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.981 05:21:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.981 05:21:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.981 05:21:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.981 05:21:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.981 05:21:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.981 05:21:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.981 05:21:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:03:49.981 05:21:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:03:49.981 05:21:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.981 05:21:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.981 05:21:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:49.981 05:21:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.981 05:21:30 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:49.981 05:21:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.981 05:21:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.981 05:21:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.981 05:21:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.981 05:21:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.981 05:21:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.981 05:21:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.981 05:21:30 -- paths/export.sh@5 -- # export PATH 00:03:49.981 05:21:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.981 05:21:30 -- nvmf/common.sh@51 -- # : 0 00:03:49.981 05:21:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:49.981 05:21:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:49.981 05:21:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.981 05:21:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.981 05:21:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.981 05:21:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:49.981 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:49.981 05:21:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:49.981 05:21:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:49.981 05:21:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:49.981 05:21:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.981 05:21:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.981 05:21:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.981 05:21:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:49.981 05:21:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.981 05:21:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.981 05:21:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.981 05:21:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:49.981 05:21:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:49.981 05:21:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:49.981 05:21:30 -- spdk/autotest.sh@48 -- # udevadm_pid=56887 00:03:49.981 05:21:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:49.981 05:21:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:49.981 05:21:30 -- pm/common@17 -- # local monitor 00:03:49.981 05:21:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.981 05:21:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.981 05:21:30 -- pm/common@21 -- # date +%s 00:03:49.981 05:21:30 -- pm/common@25 -- # sleep 1 00:03:49.981 05:21:30 -- pm/common@21 -- # date +%s 00:03:49.981 05:21:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734326490 00:03:49.981 05:21:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734326490 00:03:50.240 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734326490_collect-vmstat.pm.log 00:03:50.240 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734326490_collect-cpu-load.pm.log 00:03:51.176 05:21:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:51.176 05:21:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:51.177 05:21:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.177 05:21:31 -- common/autotest_common.sh@10 -- # set +x 00:03:51.177 05:21:31 -- spdk/autotest.sh@59 -- # create_test_list 
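The "[: : integer expression expected" complaint above is the classic empty-operand pitfall of the numeric test operator: line 33 of test/nvmf/common.sh compares an empty value with -eq, the test fails with that message on stderr, and the script simply continues down the false branch. A minimal reproduction and a typical guard, with FLAG as a hypothetical stand-in for whatever variable that line actually reads:

#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" message and show a guarded form.
FLAG=""                           # hypothetical placeholder, not the real variable name
if [ "$FLAG" -eq 1 ]; then        # empty operand -> error printed to stderr, test evaluates false
    echo "enabled"
fi
if [ "${FLAG:-0}" -eq 1 ]; then   # defaulting the value avoids the message entirely
    echo "enabled"
else
    echo "disabled"
fi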
00:03:51.177 05:21:31 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:51.177 05:21:31 -- common/autotest_common.sh@10 -- # set +x 00:03:51.177 05:21:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:51.177 05:21:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:51.177 05:21:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:51.177 05:21:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:51.177 05:21:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:51.177 05:21:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:51.177 05:21:31 -- common/autotest_common.sh@1457 -- # uname 00:03:51.177 05:21:31 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:51.177 05:21:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:51.177 05:21:31 -- common/autotest_common.sh@1477 -- # uname 00:03:51.177 05:21:31 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:51.177 05:21:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:51.177 05:21:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:51.177 lcov: LCOV version 1.15 00:03:51.177 05:21:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:06.060 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:06.060 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:24.151 05:22:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:24.151 05:22:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.151 05:22:01 -- common/autotest_common.sh@10 -- # set +x 00:04:24.151 05:22:01 -- spdk/autotest.sh@78 -- # rm -f 00:04:24.151 05:22:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.151 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:24.151 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:24.151 05:22:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:24.151 05:22:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:24.151 05:22:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:24.151 05:22:02 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:24.151 05:22:02 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:24.151 05:22:02 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:24.151 05:22:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:24.151 05:22:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:24.151 05:22:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:24.151 05:22:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:24.151 05:22:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:24.151 05:22:02 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.151 05:22:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:24.151 05:22:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:24.151 05:22:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:24.151 05:22:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:24.151 05:22:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:24.151 05:22:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:24.151 05:22:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:24.151 05:22:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:24.151 05:22:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:24.151 05:22:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:24.151 05:22:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:24.151 05:22:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:24.151 05:22:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:24.151 05:22:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:24.151 05:22:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:24.151 05:22:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:24.151 05:22:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:24.151 05:22:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:24.151 05:22:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:24.151 05:22:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.151 05:22:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:24.151 05:22:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:24.151 05:22:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:24.151 05:22:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:24.151 No valid GPT data, bailing 00:04:24.151 05:22:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.151 05:22:02 -- scripts/common.sh@394 -- # pt= 00:04:24.151 05:22:02 -- scripts/common.sh@395 -- # return 1 00:04:24.151 05:22:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:24.151 1+0 records in 00:04:24.151 1+0 records out 00:04:24.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394537 s, 266 MB/s 00:04:24.151 05:22:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.151 05:22:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:24.151 05:22:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:24.151 05:22:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:24.151 05:22:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:24.151 No valid GPT data, bailing 00:04:24.151 05:22:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:24.151 05:22:02 -- scripts/common.sh@394 -- # pt= 00:04:24.151 05:22:02 -- scripts/common.sh@395 -- # return 1 00:04:24.151 05:22:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:24.151 1+0 records in 00:04:24.151 1+0 records out 00:04:24.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451286 s, 232 MB/s 00:04:24.151 05:22:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 
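The sweep traced above is the pre-test disk preparation: every NVMe namespace is first checked for a zoned profile through /sys/block/*/queue/zoned, then probed for a partition table, and namespaces with no recognizable signature get their first MiB zeroed before the tests run. A rough stand-alone sketch of the same sweep follows; it is an illustration rather than the autotest.sh code itself, it omits the scripts/spdk-gpt.py probe, and the dd is destructive, so it should only touch disposable CI namespaces like the ones above:

#!/usr/bin/env bash
# Skip zoned namespaces, then zero the first MiB of any namespace without a partition table.
shopt -s extglob nullglob
for dev in /dev/nvme*n!(*p*); do
    name=${dev#/dev/}
    # zoned namespaces report something other than "none" in this sysfs attribute
    if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
        echo "skipping zoned namespace $dev"
        continue
    fi
    # blkid prints the partition-table type (gpt, dos, ...) or nothing at all
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        echo "no partition table on $dev, wiping first MiB"
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done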
00:04:24.151 05:22:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:24.151 05:22:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:24.151 05:22:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:24.151 05:22:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:24.151 No valid GPT data, bailing 00:04:24.151 05:22:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:24.151 05:22:02 -- scripts/common.sh@394 -- # pt= 00:04:24.151 05:22:02 -- scripts/common.sh@395 -- # return 1 00:04:24.151 05:22:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:24.151 1+0 records in 00:04:24.151 1+0 records out 00:04:24.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00342366 s, 306 MB/s 00:04:24.151 05:22:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.151 05:22:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:24.151 05:22:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:24.151 05:22:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:24.151 05:22:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:24.151 No valid GPT data, bailing 00:04:24.151 05:22:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:24.151 05:22:02 -- scripts/common.sh@394 -- # pt= 00:04:24.151 05:22:02 -- scripts/common.sh@395 -- # return 1 00:04:24.151 05:22:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:24.151 1+0 records in 00:04:24.151 1+0 records out 00:04:24.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00400292 s, 262 MB/s 00:04:24.151 05:22:02 -- spdk/autotest.sh@105 -- # sync 00:04:24.151 05:22:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:24.151 05:22:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:24.151 05:22:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:24.719 05:22:04 -- spdk/autotest.sh@111 -- # uname -s 00:04:24.719 05:22:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:24.719 05:22:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:24.719 05:22:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:25.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.288 Hugepages 00:04:25.288 node hugesize free / total 00:04:25.288 node0 1048576kB 0 / 0 00:04:25.288 node0 2048kB 0 / 0 00:04:25.288 00:04:25.288 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.288 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:25.547 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:25.547 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:25.547 05:22:05 -- spdk/autotest.sh@117 -- # uname -s 00:04:25.547 05:22:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:25.547 05:22:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:25.547 05:22:05 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.401 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.401 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.401 05:22:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:27.338 05:22:07 -- 
common/autotest_common.sh@1518 -- # bdfs=() 00:04:27.338 05:22:07 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:27.338 05:22:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:27.338 05:22:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:27.338 05:22:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:27.338 05:22:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:27.338 05:22:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.338 05:22:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:27.338 05:22:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:27.597 05:22:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:27.597 05:22:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:27.597 05:22:07 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.856 Waiting for block devices as requested 00:04:27.856 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:27.856 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.115 05:22:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.115 05:22:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:28.115 05:22:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:28.115 05:22:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:28.115 05:22:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.115 05:22:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:28.115 05:22:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.115 05:22:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:28.115 05:22:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:28.115 05:22:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:28.115 05:22:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:28.115 05:22:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.115 05:22:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.115 05:22:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.115 05:22:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.115 05:22:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.115 05:22:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:28.115 05:22:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.115 05:22:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.115 05:22:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.115 05:22:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.115 05:22:08 -- common/autotest_common.sh@1543 -- # continue 00:04:28.115 05:22:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.115 05:22:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:28.115 05:22:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
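The controller checks traced above resolve each PCIe address to its /dev/nvme* character device through sysfs, then read two id-ctrl fields with nvme-cli: OACS bit 3 (mask 0x8) tells whether the controller supports namespace management (0x12a & 0x8 = 8 here, so it does), and unvmcap confirms there is no unallocated NVM capacity left to restore. A hedged sketch of the same two checks; the device argument is only an example, the run above probed /dev/nvme0 and /dev/nvme1:

#!/usr/bin/env bash
# Read the two id-ctrl fields used above: OACS bit 3 = namespace management, unvmcap = unallocated capacity.
ctrl=${1:-/dev/nvme0}
oacs=$(nvme id-ctrl "$ctrl" | awk -F: '/^oacs/ {print $2}')
ns_manage=$(( oacs & 0x8 ))
if (( ns_manage != 0 )); then
    unvmcap=$(nvme id-ctrl "$ctrl" | awk -F: '/^unvmcap/ {print $2}')
    echo "$ctrl: namespace management supported, unvmcap=${unvmcap// /}"
else
    echo "$ctrl: namespace management not supported (oacs=${oacs// /})"
fi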
00:04:28.115 05:22:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:28.115 05:22:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.115 05:22:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:28.115 05:22:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.115 05:22:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:28.115 05:22:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:28.115 05:22:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:28.115 05:22:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:28.115 05:22:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.115 05:22:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.115 05:22:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.115 05:22:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.115 05:22:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.115 05:22:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:28.115 05:22:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.115 05:22:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.115 05:22:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.115 05:22:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.115 05:22:08 -- common/autotest_common.sh@1543 -- # continue 00:04:28.115 05:22:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:28.115 05:22:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.115 05:22:08 -- common/autotest_common.sh@10 -- # set +x 00:04:28.115 05:22:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:28.115 05:22:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.115 05:22:08 -- common/autotest_common.sh@10 -- # set +x 00:04:28.115 05:22:08 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.942 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.942 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.942 05:22:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:28.942 05:22:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.942 05:22:09 -- common/autotest_common.sh@10 -- # set +x 00:04:28.942 05:22:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:28.942 05:22:09 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:28.942 05:22:09 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:28.942 05:22:09 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:28.942 05:22:09 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:28.942 05:22:09 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:28.942 05:22:09 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:28.942 05:22:09 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:28.942 05:22:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:28.942 05:22:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:28.942 05:22:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.942 05:22:09 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:28.942 05:22:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:29.201 05:22:09 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:29.201 05:22:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:29.201 05:22:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.201 05:22:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:29.201 05:22:09 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.201 05:22:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.201 05:22:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.201 05:22:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:29.201 05:22:09 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.201 05:22:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.201 05:22:09 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:29.201 05:22:09 -- common/autotest_common.sh@1572 -- # return 0 00:04:29.201 05:22:09 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:29.201 05:22:09 -- common/autotest_common.sh@1580 -- # return 0 00:04:29.201 05:22:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:29.201 05:22:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:29.202 05:22:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.202 05:22:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.202 05:22:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:29.202 05:22:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.202 05:22:09 -- common/autotest_common.sh@10 -- # set +x 00:04:29.202 05:22:09 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:29.202 05:22:09 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:29.202 05:22:09 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:29.202 05:22:09 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.202 05:22:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.202 05:22:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.202 05:22:09 -- common/autotest_common.sh@10 -- # set +x 00:04:29.202 ************************************ 00:04:29.202 START TEST env 00:04:29.202 ************************************ 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.202 * Looking for test storage... 
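opal_revert_cleanup in the trace above keeps only controllers whose PCI device ID reads 0x0a54 from sysfs (commonly the Intel DC P4500/P4510 family used for Opal tests); the emulated 1b36:0010 controllers in this run report 0x0010, so the list stays empty and the revert step is skipped. A small sketch of that device-ID filter, where the target value comes from the trace and the rest is illustrative:

#!/usr/bin/env bash
# List NVMe PCI functions whose device ID matches the value compared against above.
target=0x0a54
matches=()
for devpath in /sys/bus/pci/devices/*; do
    [[ -e $devpath/class ]] || continue
    [[ $(cat "$devpath/class") == 0x010802 ]] || continue   # 0x010802 = NVMe controller class code
    if [[ $(cat "$devpath/device") == "$target" ]]; then
        matches+=("${devpath##*/}")
    fi
done
echo "matching controllers: ${matches[*]:-none}"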
00:04:29.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:29.202 05:22:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.202 05:22:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.202 05:22:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.202 05:22:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.202 05:22:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.202 05:22:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.202 05:22:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.202 05:22:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.202 05:22:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.202 05:22:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.202 05:22:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.202 05:22:09 env -- scripts/common.sh@344 -- # case "$op" in 00:04:29.202 05:22:09 env -- scripts/common.sh@345 -- # : 1 00:04:29.202 05:22:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.202 05:22:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.202 05:22:09 env -- scripts/common.sh@365 -- # decimal 1 00:04:29.202 05:22:09 env -- scripts/common.sh@353 -- # local d=1 00:04:29.202 05:22:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.202 05:22:09 env -- scripts/common.sh@355 -- # echo 1 00:04:29.202 05:22:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.202 05:22:09 env -- scripts/common.sh@366 -- # decimal 2 00:04:29.202 05:22:09 env -- scripts/common.sh@353 -- # local d=2 00:04:29.202 05:22:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.202 05:22:09 env -- scripts/common.sh@355 -- # echo 2 00:04:29.202 05:22:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.202 05:22:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.202 05:22:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.202 05:22:09 env -- scripts/common.sh@368 -- # return 0 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:29.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.202 --rc genhtml_branch_coverage=1 00:04:29.202 --rc genhtml_function_coverage=1 00:04:29.202 --rc genhtml_legend=1 00:04:29.202 --rc geninfo_all_blocks=1 00:04:29.202 --rc geninfo_unexecuted_blocks=1 00:04:29.202 00:04:29.202 ' 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:29.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.202 --rc genhtml_branch_coverage=1 00:04:29.202 --rc genhtml_function_coverage=1 00:04:29.202 --rc genhtml_legend=1 00:04:29.202 --rc geninfo_all_blocks=1 00:04:29.202 --rc geninfo_unexecuted_blocks=1 00:04:29.202 00:04:29.202 ' 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:29.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.202 --rc genhtml_branch_coverage=1 00:04:29.202 --rc genhtml_function_coverage=1 00:04:29.202 --rc 
genhtml_legend=1 00:04:29.202 --rc geninfo_all_blocks=1 00:04:29.202 --rc geninfo_unexecuted_blocks=1 00:04:29.202 00:04:29.202 ' 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:29.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.202 --rc genhtml_branch_coverage=1 00:04:29.202 --rc genhtml_function_coverage=1 00:04:29.202 --rc genhtml_legend=1 00:04:29.202 --rc geninfo_all_blocks=1 00:04:29.202 --rc geninfo_unexecuted_blocks=1 00:04:29.202 00:04:29.202 ' 00:04:29.202 05:22:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.202 05:22:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.202 05:22:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.461 ************************************ 00:04:29.461 START TEST env_memory 00:04:29.461 ************************************ 00:04:29.461 05:22:09 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:29.461 00:04:29.461 00:04:29.461 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.461 http://cunit.sourceforge.net/ 00:04:29.461 00:04:29.461 00:04:29.461 Suite: memory 00:04:29.461 Test: alloc and free memory map ...[2024-12-16 05:22:09.532787] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:29.461 passed 00:04:29.461 Test: mem map translation ...[2024-12-16 05:22:09.593347] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:29.461 [2024-12-16 05:22:09.593426] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:29.461 [2024-12-16 05:22:09.593527] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:29.461 [2024-12-16 05:22:09.593560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:29.461 passed 00:04:29.461 Test: mem map registration ...[2024-12-16 05:22:09.694983] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:29.461 [2024-12-16 05:22:09.695058] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:29.720 passed 00:04:29.720 Test: mem map adjacent registrations ...passed 00:04:29.720 00:04:29.720 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.720 suites 1 1 n/a 0 0 00:04:29.720 tests 4 4 4 0 0 00:04:29.720 asserts 152 152 152 0 n/a 00:04:29.720 00:04:29.720 Elapsed time = 0.335 seconds 00:04:29.720 00:04:29.720 real 0m0.375s 00:04:29.720 user 0m0.343s 00:04:29.720 sys 0m0.025s 00:04:29.720 05:22:09 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.720 05:22:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:29.720 ************************************ 00:04:29.720 END TEST env_memory 00:04:29.720 ************************************ 00:04:29.720 05:22:09 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:29.720 05:22:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.720 05:22:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.720 05:22:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.720 ************************************ 00:04:29.720 START TEST env_vtophys 00:04:29.720 ************************************ 00:04:29.720 05:22:09 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:29.720 EAL: lib.eal log level changed from notice to debug 00:04:29.720 EAL: Detected lcore 0 as core 0 on socket 0 00:04:29.720 EAL: Detected lcore 1 as core 0 on socket 0 00:04:29.720 EAL: Detected lcore 2 as core 0 on socket 0 00:04:29.720 EAL: Detected lcore 3 as core 0 on socket 0 00:04:29.720 EAL: Detected lcore 4 as core 0 on socket 0 00:04:29.720 EAL: Detected lcore 5 as core 0 on socket 0 00:04:29.720 EAL: Detected lcore 6 as core 0 on socket 0 00:04:29.720 EAL: Detected lcore 7 as core 0 on socket 0 00:04:29.720 EAL: Detected lcore 8 as core 0 on socket 0 00:04:29.720 EAL: Detected lcore 9 as core 0 on socket 0 00:04:29.720 EAL: Maximum logical cores by configuration: 128 00:04:29.720 EAL: Detected CPU lcores: 10 00:04:29.720 EAL: Detected NUMA nodes: 1 00:04:29.720 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:29.720 EAL: Detected shared linkage of DPDK 00:04:29.979 EAL: No shared files mode enabled, IPC will be disabled 00:04:29.979 EAL: Selected IOVA mode 'PA' 00:04:29.979 EAL: Probing VFIO support... 00:04:29.979 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:29.979 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:29.979 EAL: Ask a virtual area of 0x2e000 bytes 00:04:29.979 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:29.979 EAL: Setting up physically contiguous memory... 
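The run above selects IOVA mode 'PA' and then skips VFIO support because /sys/module/vfio and /sys/module/vfio_pci are absent in the test VM. A quick shell-level check of the same module presence, shown as an illustration rather than the EAL's internal probe:

#!/usr/bin/env bash
# The VFIO probe traced above reduces to whether the vfio modules show up under /sys/module.
for mod in vfio vfio_pci; do
    if [[ -d /sys/module/$mod ]]; then
        echo "$mod: loaded"
    else
        echo "$mod: not loaded"
    fi
done
# With no VFIO available, the devices stay on uio-style drivers and the EAL uses physical addresses,
# matching the 'PA' selection and the later uio_pci_generic bindings in this log.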
00:04:29.979 EAL: Setting maximum number of open files to 524288 00:04:29.979 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:29.979 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:29.979 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.979 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:29.979 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.979 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.979 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:29.979 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:29.979 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.979 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:29.979 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.979 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.979 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:29.979 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:29.979 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.979 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:29.979 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.979 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.979 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:29.979 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:29.979 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.979 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:29.979 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.979 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.979 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:29.979 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:29.979 EAL: Hugepages will be freed exactly as allocated. 00:04:29.979 EAL: No shared files mode enabled, IPC is disabled 00:04:29.979 EAL: No shared files mode enabled, IPC is disabled 00:04:29.979 EAL: TSC frequency is ~2200000 KHz 00:04:29.979 EAL: Main lcore 0 is ready (tid=7efc822a5a40;cpuset=[0]) 00:04:29.979 EAL: Trying to obtain current memory policy. 00:04:29.979 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.979 EAL: Restoring previous memory policy: 0 00:04:29.979 EAL: request: mp_malloc_sync 00:04:29.979 EAL: No shared files mode enabled, IPC is disabled 00:04:29.979 EAL: Heap on socket 0 was expanded by 2MB 00:04:29.979 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:29.979 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:29.979 EAL: Mem event callback 'spdk:(nil)' registered 00:04:29.979 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:29.979 00:04:29.979 00:04:29.979 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.979 http://cunit.sourceforge.net/ 00:04:29.979 00:04:29.979 00:04:29.979 Suite: components_suite 00:04:30.547 Test: vtophys_malloc_test ...passed 00:04:30.547 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
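Each of the four memseg lists created above reserves room for 8192 segments of 2 MiB hugepages, which is exactly the 0x400000000 byte (16 GiB) virtual areas the EAL reports; a quick check of that arithmetic:

#!/usr/bin/env bash
# 8192 segments * 2 MiB hugepage size = the 0x400000000 (16 GiB) reservation per memseg list.
n_segs=8192
hugepage_sz=2097152
bytes=$(( n_segs * hugepage_sz ))
printf 'per-list reservation: 0x%x bytes (%d GiB)\n' "$bytes" $(( bytes >> 30 ))
# -> per-list reservation: 0x400000000 bytes (16 GiB)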
00:04:30.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.547 EAL: Restoring previous memory policy: 4 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.547 EAL: Trying to obtain current memory policy. 00:04:30.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.547 EAL: Restoring previous memory policy: 4 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.547 EAL: Trying to obtain current memory policy. 00:04:30.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.547 EAL: Restoring previous memory policy: 4 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.547 EAL: Trying to obtain current memory policy. 00:04:30.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.547 EAL: Restoring previous memory policy: 4 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.547 EAL: Trying to obtain current memory policy. 00:04:30.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.547 EAL: Restoring previous memory policy: 4 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.547 EAL: Trying to obtain current memory policy. 
00:04:30.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.547 EAL: Restoring previous memory policy: 4 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.547 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.547 EAL: request: mp_malloc_sync 00:04:30.547 EAL: No shared files mode enabled, IPC is disabled 00:04:30.547 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.806 EAL: Trying to obtain current memory policy. 00:04:30.806 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.806 EAL: Restoring previous memory policy: 4 00:04:30.806 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.806 EAL: request: mp_malloc_sync 00:04:30.806 EAL: No shared files mode enabled, IPC is disabled 00:04:30.806 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.806 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.065 EAL: request: mp_malloc_sync 00:04:31.065 EAL: No shared files mode enabled, IPC is disabled 00:04:31.065 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.065 EAL: Trying to obtain current memory policy. 00:04:31.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.065 EAL: Restoring previous memory policy: 4 00:04:31.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.065 EAL: request: mp_malloc_sync 00:04:31.065 EAL: No shared files mode enabled, IPC is disabled 00:04:31.065 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.633 EAL: request: mp_malloc_sync 00:04:31.633 EAL: No shared files mode enabled, IPC is disabled 00:04:31.633 EAL: Heap on socket 0 was shrunk by 258MB 00:04:31.891 EAL: Trying to obtain current memory policy. 00:04:31.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.891 EAL: Restoring previous memory policy: 4 00:04:31.891 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.891 EAL: request: mp_malloc_sync 00:04:31.891 EAL: No shared files mode enabled, IPC is disabled 00:04:31.891 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.827 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.827 EAL: request: mp_malloc_sync 00:04:32.827 EAL: No shared files mode enabled, IPC is disabled 00:04:32.827 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.395 EAL: Trying to obtain current memory policy. 
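The vtophys malloc rounds above expand and then shrink the heap in steps of 4, 6, 10, 18, 34, ... MB, that is 2^k + 2 MB per round; the loop below merely reproduces the logged sequence and makes no claim about why the allocator picks those sizes:

#!/usr/bin/env bash
# Print the heap-expansion sizes observed in the malloc rounds above (2^k + 2 MB, k = 1..10).
for k in $(seq 1 10); do
    printf '%d MB\n' $(( (1 << k) + 2 ))
done
# -> 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026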
00:04:33.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.395 EAL: Restoring previous memory policy: 4 00:04:33.395 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.395 EAL: request: mp_malloc_sync 00:04:33.395 EAL: No shared files mode enabled, IPC is disabled 00:04:33.395 EAL: Heap on socket 0 was expanded by 1026MB 00:04:34.772 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.030 EAL: request: mp_malloc_sync 00:04:35.030 EAL: No shared files mode enabled, IPC is disabled 00:04:35.030 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:35.965 passed 00:04:35.965 00:04:35.966 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.966 suites 1 1 n/a 0 0 00:04:35.966 tests 2 2 2 0 0 00:04:35.966 asserts 5663 5663 5663 0 n/a 00:04:35.966 00:04:35.966 Elapsed time = 6.035 seconds 00:04:36.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.224 EAL: request: mp_malloc_sync 00:04:36.224 EAL: No shared files mode enabled, IPC is disabled 00:04:36.224 EAL: Heap on socket 0 was shrunk by 2MB 00:04:36.224 EAL: No shared files mode enabled, IPC is disabled 00:04:36.224 EAL: No shared files mode enabled, IPC is disabled 00:04:36.224 EAL: No shared files mode enabled, IPC is disabled 00:04:36.224 00:04:36.224 real 0m6.376s 00:04:36.224 user 0m5.502s 00:04:36.224 sys 0m0.706s 00:04:36.224 05:22:16 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.224 05:22:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:36.224 ************************************ 00:04:36.224 END TEST env_vtophys 00:04:36.224 ************************************ 00:04:36.224 05:22:16 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.224 05:22:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.224 05:22:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.224 05:22:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.224 ************************************ 00:04:36.224 START TEST env_pci 00:04:36.224 ************************************ 00:04:36.224 05:22:16 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.224 00:04:36.224 00:04:36.224 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.224 http://cunit.sourceforge.net/ 00:04:36.224 00:04:36.224 00:04:36.224 Suite: pci 00:04:36.224 Test: pci_hook ...[2024-12-16 05:22:16.358035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59171 has claimed it 00:04:36.224 passed 00:04:36.224 00:04:36.224 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.224 suites 1 1 n/a 0 0 00:04:36.224 tests 1 1 1 0 0 00:04:36.224 asserts 25 25 25 0 n/a 00:04:36.224 00:04:36.224 Elapsed time = 0.007 seconds 00:04:36.224 EAL: Cannot find device (10000:00:01.0) 00:04:36.224 EAL: Failed to attach device on primary process 00:04:36.224 00:04:36.224 real 0m0.080s 00:04:36.224 user 0m0.049s 00:04:36.224 sys 0m0.031s 00:04:36.224 05:22:16 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.224 ************************************ 00:04:36.224 END TEST env_pci 00:04:36.224 05:22:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:36.224 ************************************ 00:04:36.224 05:22:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:36.225 05:22:16 env -- env/env.sh@15 -- # uname 00:04:36.225 05:22:16 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:36.225 05:22:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:36.225 05:22:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.225 05:22:16 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:36.225 05:22:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.225 05:22:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.225 ************************************ 00:04:36.225 START TEST env_dpdk_post_init 00:04:36.225 ************************************ 00:04:36.225 05:22:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.484 EAL: Detected CPU lcores: 10 00:04:36.484 EAL: Detected NUMA nodes: 1 00:04:36.484 EAL: Detected shared linkage of DPDK 00:04:36.484 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.484 EAL: Selected IOVA mode 'PA' 00:04:36.484 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.484 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:36.484 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:36.744 Starting DPDK initialization... 00:04:36.744 Starting SPDK post initialization... 00:04:36.744 SPDK NVMe probe 00:04:36.744 Attaching to 0000:00:10.0 00:04:36.744 Attaching to 0000:00:11.0 00:04:36.744 Attached to 0000:00:10.0 00:04:36.744 Attached to 0000:00:11.0 00:04:36.744 Cleaning up... 00:04:36.744 00:04:36.744 real 0m0.293s 00:04:36.744 user 0m0.111s 00:04:36.744 sys 0m0.081s 00:04:36.744 05:22:16 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.744 05:22:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.744 ************************************ 00:04:36.744 END TEST env_dpdk_post_init 00:04:36.744 ************************************ 00:04:36.744 05:22:16 env -- env/env.sh@26 -- # uname 00:04:36.744 05:22:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.744 05:22:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.744 05:22:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.744 05:22:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.744 05:22:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.744 ************************************ 00:04:36.744 START TEST env_mem_callbacks 00:04:36.744 ************************************ 00:04:36.744 05:22:16 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.744 EAL: Detected CPU lcores: 10 00:04:36.744 EAL: Detected NUMA nodes: 1 00:04:36.744 EAL: Detected shared linkage of DPDK 00:04:36.744 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.744 EAL: Selected IOVA mode 'PA' 00:04:36.744 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.744 00:04:36.744 00:04:36.744 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.744 http://cunit.sourceforge.net/ 00:04:36.744 00:04:36.744 00:04:36.744 Suite: memory 00:04:36.744 Test: test ... 
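The env_dpdk_post_init pass above attached the spdk_nvme driver to the two emulated controllers at 0000:00:10.0 and 0000:00:11.0 (vendor:device 1b36:0010), and the memory suite starting here exercises the register/unregister notifications shown below. A small sketch for checking the same device binding by hand on the test VM (both commands are assumptions about the environment, not taken from this run):
  lspci -nn -d 1b36:0010          # list the emulated NVMe controllers by their ID from the probe above
  sudo ./scripts/setup.sh status  # SPDK helper; shows which driver each NVMe device is bound to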
00:04:36.744 register 0x200000200000 2097152 00:04:36.744 malloc 3145728 00:04:36.744 register 0x200000400000 4194304 00:04:36.744 buf 0x2000004fffc0 len 3145728 PASSED 00:04:36.744 malloc 64 00:04:36.744 buf 0x2000004ffec0 len 64 PASSED 00:04:36.744 malloc 4194304 00:04:36.744 register 0x200000800000 6291456 00:04:36.744 buf 0x2000009fffc0 len 4194304 PASSED 00:04:36.744 free 0x2000004fffc0 3145728 00:04:36.744 free 0x2000004ffec0 64 00:04:36.744 unregister 0x200000400000 4194304 PASSED 00:04:36.744 free 0x2000009fffc0 4194304 00:04:37.003 unregister 0x200000800000 6291456 PASSED 00:04:37.003 malloc 8388608 00:04:37.003 register 0x200000400000 10485760 00:04:37.003 buf 0x2000005fffc0 len 8388608 PASSED 00:04:37.003 free 0x2000005fffc0 8388608 00:04:37.003 unregister 0x200000400000 10485760 PASSED 00:04:37.003 passed 00:04:37.003 00:04:37.003 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.003 suites 1 1 n/a 0 0 00:04:37.003 tests 1 1 1 0 0 00:04:37.003 asserts 15 15 15 0 n/a 00:04:37.003 00:04:37.003 Elapsed time = 0.053 seconds 00:04:37.003 00:04:37.003 real 0m0.242s 00:04:37.003 user 0m0.088s 00:04:37.003 sys 0m0.051s 00:04:37.003 05:22:17 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.003 05:22:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:37.003 ************************************ 00:04:37.003 END TEST env_mem_callbacks 00:04:37.003 ************************************ 00:04:37.003 00:04:37.003 real 0m7.829s 00:04:37.003 user 0m6.286s 00:04:37.003 sys 0m1.146s 00:04:37.003 05:22:17 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.003 05:22:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.003 ************************************ 00:04:37.003 END TEST env 00:04:37.003 ************************************ 00:04:37.003 05:22:17 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.003 05:22:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.003 05:22:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.003 05:22:17 -- common/autotest_common.sh@10 -- # set +x 00:04:37.003 ************************************ 00:04:37.003 START TEST rpc 00:04:37.003 ************************************ 00:04:37.003 05:22:17 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.003 * Looking for test storage... 
00:04:37.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.003 05:22:17 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.003 05:22:17 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.003 05:22:17 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.262 05:22:17 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.262 05:22:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.262 05:22:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.262 05:22:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.262 05:22:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.262 05:22:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.262 05:22:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.262 05:22:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.262 05:22:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.262 05:22:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.262 05:22:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.262 05:22:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.262 05:22:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.262 05:22:17 rpc -- scripts/common.sh@345 -- # : 1 00:04:37.262 05:22:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.262 05:22:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.262 05:22:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.262 05:22:17 rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.262 05:22:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.263 05:22:17 rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.263 05:22:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.263 05:22:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.263 05:22:17 rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.263 05:22:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.263 05:22:17 rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.263 05:22:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.263 05:22:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.263 05:22:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.263 05:22:17 rpc -- scripts/common.sh@368 -- # return 0 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.263 --rc genhtml_branch_coverage=1 00:04:37.263 --rc genhtml_function_coverage=1 00:04:37.263 --rc genhtml_legend=1 00:04:37.263 --rc geninfo_all_blocks=1 00:04:37.263 --rc geninfo_unexecuted_blocks=1 00:04:37.263 00:04:37.263 ' 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.263 --rc genhtml_branch_coverage=1 00:04:37.263 --rc genhtml_function_coverage=1 00:04:37.263 --rc genhtml_legend=1 00:04:37.263 --rc geninfo_all_blocks=1 00:04:37.263 --rc geninfo_unexecuted_blocks=1 00:04:37.263 00:04:37.263 ' 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.263 --rc genhtml_branch_coverage=1 00:04:37.263 --rc genhtml_function_coverage=1 00:04:37.263 --rc 
genhtml_legend=1 00:04:37.263 --rc geninfo_all_blocks=1 00:04:37.263 --rc geninfo_unexecuted_blocks=1 00:04:37.263 00:04:37.263 ' 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.263 --rc genhtml_branch_coverage=1 00:04:37.263 --rc genhtml_function_coverage=1 00:04:37.263 --rc genhtml_legend=1 00:04:37.263 --rc geninfo_all_blocks=1 00:04:37.263 --rc geninfo_unexecuted_blocks=1 00:04:37.263 00:04:37.263 ' 00:04:37.263 05:22:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59298 00:04:37.263 05:22:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.263 05:22:17 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:37.263 05:22:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59298 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@835 -- # '[' -z 59298 ']' 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.263 05:22:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.263 [2024-12-16 05:22:17.487433] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:37.263 [2024-12-16 05:22:17.487839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59298 ] 00:04:37.522 [2024-12-16 05:22:17.672142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.522 [2024-12-16 05:22:17.767740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:37.522 [2024-12-16 05:22:17.768033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59298' to capture a snapshot of events at runtime. 00:04:37.522 [2024-12-16 05:22:17.768236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:37.522 [2024-12-16 05:22:17.768444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:37.522 [2024-12-16 05:22:17.768577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59298 for offline analysis/debug. 
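The app_setup_trace notices above describe two ways to inspect the 'bdev' tracepoints this spdk_tgt instance records because it was launched with '-e bdev'. A minimal sketch of both, assuming spdk_trace is built alongside spdk_tgt under build/bin (pid 59298 is the one printed above and changes on every run):
  ./build/bin/spdk_trace -s spdk_tgt -p 59298            # live snapshot; flags quoted from the notice above
  cp /dev/shm/spdk_tgt_trace.pid59298 /tmp/trace.59298   # keep the shm file for offline analysis
  ./build/bin/spdk_trace -f /tmp/trace.59298             # '-f <file>' is an assumption; check spdk_trace --help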
00:04:37.522 [2024-12-16 05:22:17.770075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.781 [2024-12-16 05:22:17.961857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:38.349 05:22:18 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.349 05:22:18 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:38.349 05:22:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.349 05:22:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.349 05:22:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:38.349 05:22:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:38.349 05:22:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.349 05:22:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.349 05:22:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.349 ************************************ 00:04:38.349 START TEST rpc_integrity 00:04:38.349 ************************************ 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:38.349 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.349 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.349 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.349 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.349 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.349 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:38.349 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.349 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.349 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.349 { 00:04:38.349 "name": "Malloc0", 00:04:38.349 "aliases": [ 00:04:38.349 "c2186625-9eb8-43e6-978b-7f4f00c0b899" 00:04:38.349 ], 00:04:38.349 "product_name": "Malloc disk", 00:04:38.349 "block_size": 512, 00:04:38.349 "num_blocks": 16384, 00:04:38.349 "uuid": "c2186625-9eb8-43e6-978b-7f4f00c0b899", 00:04:38.349 "assigned_rate_limits": { 00:04:38.349 "rw_ios_per_sec": 0, 00:04:38.349 "rw_mbytes_per_sec": 0, 00:04:38.349 "r_mbytes_per_sec": 0, 00:04:38.349 "w_mbytes_per_sec": 0 00:04:38.349 }, 00:04:38.349 "claimed": false, 00:04:38.349 "zoned": false, 00:04:38.349 
"supported_io_types": { 00:04:38.349 "read": true, 00:04:38.349 "write": true, 00:04:38.349 "unmap": true, 00:04:38.349 "flush": true, 00:04:38.349 "reset": true, 00:04:38.349 "nvme_admin": false, 00:04:38.349 "nvme_io": false, 00:04:38.349 "nvme_io_md": false, 00:04:38.349 "write_zeroes": true, 00:04:38.349 "zcopy": true, 00:04:38.349 "get_zone_info": false, 00:04:38.349 "zone_management": false, 00:04:38.349 "zone_append": false, 00:04:38.349 "compare": false, 00:04:38.349 "compare_and_write": false, 00:04:38.349 "abort": true, 00:04:38.349 "seek_hole": false, 00:04:38.349 "seek_data": false, 00:04:38.349 "copy": true, 00:04:38.349 "nvme_iov_md": false 00:04:38.349 }, 00:04:38.349 "memory_domains": [ 00:04:38.349 { 00:04:38.349 "dma_device_id": "system", 00:04:38.349 "dma_device_type": 1 00:04:38.349 }, 00:04:38.349 { 00:04:38.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.349 "dma_device_type": 2 00:04:38.349 } 00:04:38.349 ], 00:04:38.349 "driver_specific": {} 00:04:38.349 } 00:04:38.349 ]' 00:04:38.349 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.609 [2024-12-16 05:22:18.613276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:38.609 [2024-12-16 05:22:18.613349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.609 [2024-12-16 05:22:18.613428] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:04:38.609 [2024-12-16 05:22:18.613452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.609 [2024-12-16 05:22:18.616225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.609 [2024-12-16 05:22:18.616265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.609 Passthru0 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.609 { 00:04:38.609 "name": "Malloc0", 00:04:38.609 "aliases": [ 00:04:38.609 "c2186625-9eb8-43e6-978b-7f4f00c0b899" 00:04:38.609 ], 00:04:38.609 "product_name": "Malloc disk", 00:04:38.609 "block_size": 512, 00:04:38.609 "num_blocks": 16384, 00:04:38.609 "uuid": "c2186625-9eb8-43e6-978b-7f4f00c0b899", 00:04:38.609 "assigned_rate_limits": { 00:04:38.609 "rw_ios_per_sec": 0, 00:04:38.609 "rw_mbytes_per_sec": 0, 00:04:38.609 "r_mbytes_per_sec": 0, 00:04:38.609 "w_mbytes_per_sec": 0 00:04:38.609 }, 00:04:38.609 "claimed": true, 00:04:38.609 "claim_type": "exclusive_write", 00:04:38.609 "zoned": false, 00:04:38.609 "supported_io_types": { 00:04:38.609 "read": true, 00:04:38.609 "write": true, 00:04:38.609 "unmap": true, 00:04:38.609 "flush": true, 00:04:38.609 "reset": true, 00:04:38.609 "nvme_admin": false, 
00:04:38.609 "nvme_io": false, 00:04:38.609 "nvme_io_md": false, 00:04:38.609 "write_zeroes": true, 00:04:38.609 "zcopy": true, 00:04:38.609 "get_zone_info": false, 00:04:38.609 "zone_management": false, 00:04:38.609 "zone_append": false, 00:04:38.609 "compare": false, 00:04:38.609 "compare_and_write": false, 00:04:38.609 "abort": true, 00:04:38.609 "seek_hole": false, 00:04:38.609 "seek_data": false, 00:04:38.609 "copy": true, 00:04:38.609 "nvme_iov_md": false 00:04:38.609 }, 00:04:38.609 "memory_domains": [ 00:04:38.609 { 00:04:38.609 "dma_device_id": "system", 00:04:38.609 "dma_device_type": 1 00:04:38.609 }, 00:04:38.609 { 00:04:38.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.609 "dma_device_type": 2 00:04:38.609 } 00:04:38.609 ], 00:04:38.609 "driver_specific": {} 00:04:38.609 }, 00:04:38.609 { 00:04:38.609 "name": "Passthru0", 00:04:38.609 "aliases": [ 00:04:38.609 "5c65a5a7-b606-5086-b5e9-0e9777027409" 00:04:38.609 ], 00:04:38.609 "product_name": "passthru", 00:04:38.609 "block_size": 512, 00:04:38.609 "num_blocks": 16384, 00:04:38.609 "uuid": "5c65a5a7-b606-5086-b5e9-0e9777027409", 00:04:38.609 "assigned_rate_limits": { 00:04:38.609 "rw_ios_per_sec": 0, 00:04:38.609 "rw_mbytes_per_sec": 0, 00:04:38.609 "r_mbytes_per_sec": 0, 00:04:38.609 "w_mbytes_per_sec": 0 00:04:38.609 }, 00:04:38.609 "claimed": false, 00:04:38.609 "zoned": false, 00:04:38.609 "supported_io_types": { 00:04:38.609 "read": true, 00:04:38.609 "write": true, 00:04:38.609 "unmap": true, 00:04:38.609 "flush": true, 00:04:38.609 "reset": true, 00:04:38.609 "nvme_admin": false, 00:04:38.609 "nvme_io": false, 00:04:38.609 "nvme_io_md": false, 00:04:38.609 "write_zeroes": true, 00:04:38.609 "zcopy": true, 00:04:38.609 "get_zone_info": false, 00:04:38.609 "zone_management": false, 00:04:38.609 "zone_append": false, 00:04:38.609 "compare": false, 00:04:38.609 "compare_and_write": false, 00:04:38.609 "abort": true, 00:04:38.609 "seek_hole": false, 00:04:38.609 "seek_data": false, 00:04:38.609 "copy": true, 00:04:38.609 "nvme_iov_md": false 00:04:38.609 }, 00:04:38.609 "memory_domains": [ 00:04:38.609 { 00:04:38.609 "dma_device_id": "system", 00:04:38.609 "dma_device_type": 1 00:04:38.609 }, 00:04:38.609 { 00:04:38.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.609 "dma_device_type": 2 00:04:38.609 } 00:04:38.609 ], 00:04:38.609 "driver_specific": { 00:04:38.609 "passthru": { 00:04:38.609 "name": "Passthru0", 00:04:38.609 "base_bdev_name": "Malloc0" 00:04:38.609 } 00:04:38.609 } 00:04:38.609 } 00:04:38.609 ]' 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.609 05:22:18 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.609 ************************************ 00:04:38.609 END TEST rpc_integrity 00:04:38.609 ************************************ 00:04:38.609 05:22:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.609 00:04:38.609 real 0m0.326s 00:04:38.609 user 0m0.202s 00:04:38.609 sys 0m0.034s 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.609 05:22:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.609 05:22:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:38.609 05:22:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.609 05:22:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.609 05:22:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.609 ************************************ 00:04:38.609 START TEST rpc_plugins 00:04:38.609 ************************************ 00:04:38.609 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:38.609 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:38.609 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.609 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.609 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.609 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:38.609 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:38.609 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.609 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.869 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.869 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:38.869 { 00:04:38.869 "name": "Malloc1", 00:04:38.869 "aliases": [ 00:04:38.869 "ef6a36e6-1c0d-4b9c-965a-e262db092a29" 00:04:38.869 ], 00:04:38.869 "product_name": "Malloc disk", 00:04:38.869 "block_size": 4096, 00:04:38.869 "num_blocks": 256, 00:04:38.869 "uuid": "ef6a36e6-1c0d-4b9c-965a-e262db092a29", 00:04:38.869 "assigned_rate_limits": { 00:04:38.869 "rw_ios_per_sec": 0, 00:04:38.869 "rw_mbytes_per_sec": 0, 00:04:38.869 "r_mbytes_per_sec": 0, 00:04:38.869 "w_mbytes_per_sec": 0 00:04:38.869 }, 00:04:38.869 "claimed": false, 00:04:38.869 "zoned": false, 00:04:38.869 "supported_io_types": { 00:04:38.869 "read": true, 00:04:38.869 "write": true, 00:04:38.869 "unmap": true, 00:04:38.869 "flush": true, 00:04:38.869 "reset": true, 00:04:38.869 "nvme_admin": false, 00:04:38.869 "nvme_io": false, 00:04:38.869 "nvme_io_md": false, 00:04:38.869 "write_zeroes": true, 00:04:38.869 "zcopy": true, 00:04:38.869 "get_zone_info": false, 00:04:38.869 "zone_management": false, 00:04:38.869 "zone_append": false, 00:04:38.869 "compare": false, 00:04:38.869 "compare_and_write": false, 00:04:38.869 "abort": true, 00:04:38.869 "seek_hole": false, 00:04:38.869 "seek_data": false, 00:04:38.869 "copy": true, 00:04:38.869 "nvme_iov_md": false 00:04:38.869 }, 00:04:38.869 "memory_domains": [ 00:04:38.869 { 
00:04:38.869 "dma_device_id": "system", 00:04:38.869 "dma_device_type": 1 00:04:38.869 }, 00:04:38.869 { 00:04:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.869 "dma_device_type": 2 00:04:38.869 } 00:04:38.869 ], 00:04:38.869 "driver_specific": {} 00:04:38.869 } 00:04:38.869 ]' 00:04:38.869 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:38.869 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:38.869 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:38.869 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.869 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.869 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.869 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:38.869 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.869 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.869 05:22:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.869 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:38.869 05:22:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:38.869 ************************************ 00:04:38.869 END TEST rpc_plugins 00:04:38.869 ************************************ 00:04:38.869 05:22:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:38.869 00:04:38.869 real 0m0.168s 00:04:38.869 user 0m0.112s 00:04:38.869 sys 0m0.018s 00:04:38.869 05:22:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.869 05:22:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.869 05:22:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.869 05:22:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.869 05:22:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.869 05:22:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.869 ************************************ 00:04:38.869 START TEST rpc_trace_cmd_test 00:04:38.869 ************************************ 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:38.869 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59298", 00:04:38.869 "tpoint_group_mask": "0x8", 00:04:38.869 "iscsi_conn": { 00:04:38.869 "mask": "0x2", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "scsi": { 00:04:38.869 "mask": "0x4", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "bdev": { 00:04:38.869 "mask": "0x8", 00:04:38.869 "tpoint_mask": "0xffffffffffffffff" 00:04:38.869 }, 00:04:38.869 "nvmf_rdma": { 00:04:38.869 "mask": "0x10", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "nvmf_tcp": { 00:04:38.869 "mask": "0x20", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "ftl": { 00:04:38.869 
"mask": "0x40", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "blobfs": { 00:04:38.869 "mask": "0x80", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "dsa": { 00:04:38.869 "mask": "0x200", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "thread": { 00:04:38.869 "mask": "0x400", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "nvme_pcie": { 00:04:38.869 "mask": "0x800", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "iaa": { 00:04:38.869 "mask": "0x1000", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "nvme_tcp": { 00:04:38.869 "mask": "0x2000", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "bdev_nvme": { 00:04:38.869 "mask": "0x4000", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "sock": { 00:04:38.869 "mask": "0x8000", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "blob": { 00:04:38.869 "mask": "0x10000", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "bdev_raid": { 00:04:38.869 "mask": "0x20000", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 }, 00:04:38.869 "scheduler": { 00:04:38.869 "mask": "0x40000", 00:04:38.869 "tpoint_mask": "0x0" 00:04:38.869 } 00:04:38.869 }' 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:38.869 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:39.128 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:39.128 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.128 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.128 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.128 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.128 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.128 ************************************ 00:04:39.128 END TEST rpc_trace_cmd_test 00:04:39.128 ************************************ 00:04:39.128 05:22:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.128 00:04:39.128 real 0m0.231s 00:04:39.128 user 0m0.197s 00:04:39.128 sys 0m0.026s 00:04:39.128 05:22:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.128 05:22:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.128 05:22:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.128 05:22:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.128 05:22:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.128 05:22:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.128 05:22:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.128 05:22:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.128 ************************************ 00:04:39.128 START TEST rpc_daemon_integrity 00:04:39.128 ************************************ 00:04:39.128 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:39.128 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.128 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.128 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.128 
05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.128 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.128 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.387 { 00:04:39.387 "name": "Malloc2", 00:04:39.387 "aliases": [ 00:04:39.387 "2040d251-30c7-4749-9b9f-7b8c12e879ad" 00:04:39.387 ], 00:04:39.387 "product_name": "Malloc disk", 00:04:39.387 "block_size": 512, 00:04:39.387 "num_blocks": 16384, 00:04:39.387 "uuid": "2040d251-30c7-4749-9b9f-7b8c12e879ad", 00:04:39.387 "assigned_rate_limits": { 00:04:39.387 "rw_ios_per_sec": 0, 00:04:39.387 "rw_mbytes_per_sec": 0, 00:04:39.387 "r_mbytes_per_sec": 0, 00:04:39.387 "w_mbytes_per_sec": 0 00:04:39.387 }, 00:04:39.387 "claimed": false, 00:04:39.387 "zoned": false, 00:04:39.387 "supported_io_types": { 00:04:39.387 "read": true, 00:04:39.387 "write": true, 00:04:39.387 "unmap": true, 00:04:39.387 "flush": true, 00:04:39.387 "reset": true, 00:04:39.387 "nvme_admin": false, 00:04:39.387 "nvme_io": false, 00:04:39.387 "nvme_io_md": false, 00:04:39.387 "write_zeroes": true, 00:04:39.387 "zcopy": true, 00:04:39.387 "get_zone_info": false, 00:04:39.387 "zone_management": false, 00:04:39.387 "zone_append": false, 00:04:39.387 "compare": false, 00:04:39.387 "compare_and_write": false, 00:04:39.387 "abort": true, 00:04:39.387 "seek_hole": false, 00:04:39.387 "seek_data": false, 00:04:39.387 "copy": true, 00:04:39.387 "nvme_iov_md": false 00:04:39.387 }, 00:04:39.387 "memory_domains": [ 00:04:39.387 { 00:04:39.387 "dma_device_id": "system", 00:04:39.387 "dma_device_type": 1 00:04:39.387 }, 00:04:39.387 { 00:04:39.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.387 "dma_device_type": 2 00:04:39.387 } 00:04:39.387 ], 00:04:39.387 "driver_specific": {} 00:04:39.387 } 00:04:39.387 ]' 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.387 [2024-12-16 05:22:19.512514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:39.387 [2024-12-16 05:22:19.512589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:39.387 [2024-12-16 05:22:19.512654] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:04:39.387 [2024-12-16 05:22:19.512672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.387 [2024-12-16 05:22:19.515371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.387 [2024-12-16 05:22:19.515411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.387 Passthru0 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.387 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.388 { 00:04:39.388 "name": "Malloc2", 00:04:39.388 "aliases": [ 00:04:39.388 "2040d251-30c7-4749-9b9f-7b8c12e879ad" 00:04:39.388 ], 00:04:39.388 "product_name": "Malloc disk", 00:04:39.388 "block_size": 512, 00:04:39.388 "num_blocks": 16384, 00:04:39.388 "uuid": "2040d251-30c7-4749-9b9f-7b8c12e879ad", 00:04:39.388 "assigned_rate_limits": { 00:04:39.388 "rw_ios_per_sec": 0, 00:04:39.388 "rw_mbytes_per_sec": 0, 00:04:39.388 "r_mbytes_per_sec": 0, 00:04:39.388 "w_mbytes_per_sec": 0 00:04:39.388 }, 00:04:39.388 "claimed": true, 00:04:39.388 "claim_type": "exclusive_write", 00:04:39.388 "zoned": false, 00:04:39.388 "supported_io_types": { 00:04:39.388 "read": true, 00:04:39.388 "write": true, 00:04:39.388 "unmap": true, 00:04:39.388 "flush": true, 00:04:39.388 "reset": true, 00:04:39.388 "nvme_admin": false, 00:04:39.388 "nvme_io": false, 00:04:39.388 "nvme_io_md": false, 00:04:39.388 "write_zeroes": true, 00:04:39.388 "zcopy": true, 00:04:39.388 "get_zone_info": false, 00:04:39.388 "zone_management": false, 00:04:39.388 "zone_append": false, 00:04:39.388 "compare": false, 00:04:39.388 "compare_and_write": false, 00:04:39.388 "abort": true, 00:04:39.388 "seek_hole": false, 00:04:39.388 "seek_data": false, 00:04:39.388 "copy": true, 00:04:39.388 "nvme_iov_md": false 00:04:39.388 }, 00:04:39.388 "memory_domains": [ 00:04:39.388 { 00:04:39.388 "dma_device_id": "system", 00:04:39.388 "dma_device_type": 1 00:04:39.388 }, 00:04:39.388 { 00:04:39.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.388 "dma_device_type": 2 00:04:39.388 } 00:04:39.388 ], 00:04:39.388 "driver_specific": {} 00:04:39.388 }, 00:04:39.388 { 00:04:39.388 "name": "Passthru0", 00:04:39.388 "aliases": [ 00:04:39.388 "49e6b36c-7fe3-54e5-8c0e-efb302bc21fb" 00:04:39.388 ], 00:04:39.388 "product_name": "passthru", 00:04:39.388 "block_size": 512, 00:04:39.388 "num_blocks": 16384, 00:04:39.388 "uuid": "49e6b36c-7fe3-54e5-8c0e-efb302bc21fb", 00:04:39.388 "assigned_rate_limits": { 00:04:39.388 "rw_ios_per_sec": 0, 00:04:39.388 "rw_mbytes_per_sec": 0, 00:04:39.388 "r_mbytes_per_sec": 0, 00:04:39.388 "w_mbytes_per_sec": 0 00:04:39.388 }, 00:04:39.388 "claimed": false, 00:04:39.388 "zoned": false, 00:04:39.388 "supported_io_types": { 00:04:39.388 "read": true, 00:04:39.388 "write": true, 00:04:39.388 "unmap": true, 00:04:39.388 "flush": true, 00:04:39.388 "reset": true, 00:04:39.388 "nvme_admin": false, 00:04:39.388 "nvme_io": false, 00:04:39.388 
"nvme_io_md": false, 00:04:39.388 "write_zeroes": true, 00:04:39.388 "zcopy": true, 00:04:39.388 "get_zone_info": false, 00:04:39.388 "zone_management": false, 00:04:39.388 "zone_append": false, 00:04:39.388 "compare": false, 00:04:39.388 "compare_and_write": false, 00:04:39.388 "abort": true, 00:04:39.388 "seek_hole": false, 00:04:39.388 "seek_data": false, 00:04:39.388 "copy": true, 00:04:39.388 "nvme_iov_md": false 00:04:39.388 }, 00:04:39.388 "memory_domains": [ 00:04:39.388 { 00:04:39.388 "dma_device_id": "system", 00:04:39.388 "dma_device_type": 1 00:04:39.388 }, 00:04:39.388 { 00:04:39.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.388 "dma_device_type": 2 00:04:39.388 } 00:04:39.388 ], 00:04:39.388 "driver_specific": { 00:04:39.388 "passthru": { 00:04:39.388 "name": "Passthru0", 00:04:39.388 "base_bdev_name": "Malloc2" 00:04:39.388 } 00:04:39.388 } 00:04:39.388 } 00:04:39.388 ]' 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.388 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.647 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.647 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.647 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.647 ************************************ 00:04:39.647 END TEST rpc_daemon_integrity 00:04:39.647 ************************************ 00:04:39.647 05:22:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.647 00:04:39.647 real 0m0.361s 00:04:39.647 user 0m0.230s 00:04:39.647 sys 0m0.039s 00:04:39.647 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.647 05:22:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.647 05:22:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:39.647 05:22:19 rpc -- rpc/rpc.sh@84 -- # killprocess 59298 00:04:39.647 05:22:19 rpc -- common/autotest_common.sh@954 -- # '[' -z 59298 ']' 00:04:39.647 05:22:19 rpc -- common/autotest_common.sh@958 -- # kill -0 59298 00:04:39.647 05:22:19 rpc -- common/autotest_common.sh@959 -- # uname 00:04:39.647 05:22:19 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.647 05:22:19 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59298 00:04:39.647 killing process with pid 59298 00:04:39.648 05:22:19 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.648 05:22:19 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.648 05:22:19 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59298' 00:04:39.648 05:22:19 rpc -- common/autotest_common.sh@973 -- # kill 59298 00:04:39.648 05:22:19 rpc -- common/autotest_common.sh@978 -- # wait 59298 00:04:41.551 ************************************ 00:04:41.551 END TEST rpc 00:04:41.551 ************************************ 00:04:41.551 00:04:41.551 real 0m4.416s 00:04:41.551 user 0m5.220s 00:04:41.551 sys 0m0.737s 00:04:41.551 05:22:21 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.551 05:22:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.551 05:22:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:41.551 05:22:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.551 05:22:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.551 05:22:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.551 ************************************ 00:04:41.551 START TEST skip_rpc 00:04:41.551 ************************************ 00:04:41.551 05:22:21 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:41.551 * Looking for test storage... 00:04:41.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.551 05:22:21 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.551 05:22:21 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.551 05:22:21 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.551 05:22:21 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.551 05:22:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.551 05:22:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.551 05:22:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.551 05:22:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.551 05:22:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.551 05:22:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.552 05:22:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:41.552 05:22:21 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.552 05:22:21 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.552 --rc genhtml_branch_coverage=1 00:04:41.552 --rc genhtml_function_coverage=1 00:04:41.552 --rc genhtml_legend=1 00:04:41.552 --rc geninfo_all_blocks=1 00:04:41.552 --rc geninfo_unexecuted_blocks=1 00:04:41.552 00:04:41.552 ' 00:04:41.552 05:22:21 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.552 --rc genhtml_branch_coverage=1 00:04:41.552 --rc genhtml_function_coverage=1 00:04:41.552 --rc genhtml_legend=1 00:04:41.552 --rc geninfo_all_blocks=1 00:04:41.552 --rc geninfo_unexecuted_blocks=1 00:04:41.552 00:04:41.552 ' 00:04:41.552 05:22:21 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.552 --rc genhtml_branch_coverage=1 00:04:41.552 --rc genhtml_function_coverage=1 00:04:41.552 --rc genhtml_legend=1 00:04:41.552 --rc geninfo_all_blocks=1 00:04:41.552 --rc geninfo_unexecuted_blocks=1 00:04:41.552 00:04:41.552 ' 00:04:41.552 05:22:21 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.552 --rc genhtml_branch_coverage=1 00:04:41.552 --rc genhtml_function_coverage=1 00:04:41.552 --rc genhtml_legend=1 00:04:41.552 --rc geninfo_all_blocks=1 00:04:41.552 --rc geninfo_unexecuted_blocks=1 00:04:41.552 00:04:41.552 ' 00:04:41.552 05:22:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.552 05:22:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.552 05:22:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.552 05:22:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.552 05:22:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.552 05:22:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.552 ************************************ 00:04:41.552 START TEST skip_rpc 00:04:41.552 ************************************ 00:04:41.552 05:22:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:41.810 05:22:21 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=59516 00:04:41.810 05:22:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.810 05:22:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.810 05:22:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.810 [2024-12-16 05:22:21.914518] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:41.810 [2024-12-16 05:22:21.914697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59516 ] 00:04:42.069 [2024-12-16 05:22:22.088536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.069 [2024-12-16 05:22:22.212545] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.327 [2024-12-16 05:22:22.421113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:47.598 05:22:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59516 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59516 ']' 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59516 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59516 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.599 killing process with pid 59516 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59516' 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59516 00:04:47.599 05:22:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59516 00:04:48.536 00:04:48.536 real 0m6.822s 00:04:48.536 user 0m6.400s 00:04:48.536 sys 0m0.323s 00:04:48.536 ************************************ 00:04:48.536 END TEST skip_rpc 00:04:48.536 ************************************ 00:04:48.536 05:22:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.536 05:22:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.536 05:22:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:48.536 05:22:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.536 05:22:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.536 05:22:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.536 ************************************ 00:04:48.536 START TEST skip_rpc_with_json 00:04:48.536 ************************************ 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:48.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59620 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59620 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59620 ']' 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.536 05:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.795 [2024-12-16 05:22:28.808758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:48.795 [2024-12-16 05:22:28.808929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59620 ] 00:04:48.795 [2024-12-16 05:22:28.979408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.055 [2024-12-16 05:22:29.063083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.055 [2024-12-16 05:22:29.250444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:49.622 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.622 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:49.622 05:22:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:49.622 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.622 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.622 [2024-12-16 05:22:29.742724] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:49.622 request: 00:04:49.622 { 00:04:49.622 "trtype": "tcp", 00:04:49.622 "method": "nvmf_get_transports", 00:04:49.622 "req_id": 1 00:04:49.622 } 00:04:49.622 Got JSON-RPC error response 00:04:49.622 response: 00:04:49.622 { 00:04:49.622 "code": -19, 00:04:49.622 "message": "No such device" 00:04:49.622 } 00:04:49.622 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:49.622 05:22:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:49.622 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.623 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.623 [2024-12-16 05:22:29.754812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.623 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.623 05:22:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:49.623 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.623 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.882 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.882 05:22:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.882 { 00:04:49.882 "subsystems": [ 00:04:49.882 { 00:04:49.882 "subsystem": "fsdev", 00:04:49.882 "config": [ 00:04:49.882 { 00:04:49.882 "method": "fsdev_set_opts", 00:04:49.882 "params": { 00:04:49.882 "fsdev_io_pool_size": 65535, 00:04:49.882 "fsdev_io_cache_size": 256 00:04:49.882 } 00:04:49.882 } 00:04:49.882 ] 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "subsystem": "vfio_user_target", 00:04:49.882 "config": null 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "subsystem": "keyring", 00:04:49.882 "config": [] 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "subsystem": "iobuf", 00:04:49.882 "config": [ 00:04:49.882 { 00:04:49.882 "method": "iobuf_set_options", 00:04:49.882 "params": { 00:04:49.882 "small_pool_count": 8192, 00:04:49.882 "large_pool_count": 1024, 00:04:49.882 
"small_bufsize": 8192, 00:04:49.882 "large_bufsize": 135168, 00:04:49.882 "enable_numa": false 00:04:49.882 } 00:04:49.882 } 00:04:49.882 ] 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "subsystem": "sock", 00:04:49.882 "config": [ 00:04:49.882 { 00:04:49.882 "method": "sock_set_default_impl", 00:04:49.882 "params": { 00:04:49.882 "impl_name": "uring" 00:04:49.882 } 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "method": "sock_impl_set_options", 00:04:49.882 "params": { 00:04:49.882 "impl_name": "ssl", 00:04:49.882 "recv_buf_size": 4096, 00:04:49.882 "send_buf_size": 4096, 00:04:49.882 "enable_recv_pipe": true, 00:04:49.882 "enable_quickack": false, 00:04:49.882 "enable_placement_id": 0, 00:04:49.882 "enable_zerocopy_send_server": true, 00:04:49.882 "enable_zerocopy_send_client": false, 00:04:49.882 "zerocopy_threshold": 0, 00:04:49.882 "tls_version": 0, 00:04:49.882 "enable_ktls": false 00:04:49.882 } 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "method": "sock_impl_set_options", 00:04:49.882 "params": { 00:04:49.882 "impl_name": "posix", 00:04:49.882 "recv_buf_size": 2097152, 00:04:49.882 "send_buf_size": 2097152, 00:04:49.882 "enable_recv_pipe": true, 00:04:49.882 "enable_quickack": false, 00:04:49.882 "enable_placement_id": 0, 00:04:49.882 "enable_zerocopy_send_server": true, 00:04:49.882 "enable_zerocopy_send_client": false, 00:04:49.882 "zerocopy_threshold": 0, 00:04:49.882 "tls_version": 0, 00:04:49.882 "enable_ktls": false 00:04:49.882 } 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "method": "sock_impl_set_options", 00:04:49.882 "params": { 00:04:49.882 "impl_name": "uring", 00:04:49.882 "recv_buf_size": 2097152, 00:04:49.882 "send_buf_size": 2097152, 00:04:49.882 "enable_recv_pipe": true, 00:04:49.882 "enable_quickack": false, 00:04:49.882 "enable_placement_id": 0, 00:04:49.882 "enable_zerocopy_send_server": false, 00:04:49.882 "enable_zerocopy_send_client": false, 00:04:49.882 "zerocopy_threshold": 0, 00:04:49.882 "tls_version": 0, 00:04:49.882 "enable_ktls": false 00:04:49.882 } 00:04:49.882 } 00:04:49.882 ] 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "subsystem": "vmd", 00:04:49.882 "config": [] 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "subsystem": "accel", 00:04:49.882 "config": [ 00:04:49.882 { 00:04:49.882 "method": "accel_set_options", 00:04:49.882 "params": { 00:04:49.882 "small_cache_size": 128, 00:04:49.882 "large_cache_size": 16, 00:04:49.882 "task_count": 2048, 00:04:49.882 "sequence_count": 2048, 00:04:49.882 "buf_count": 2048 00:04:49.882 } 00:04:49.882 } 00:04:49.882 ] 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "subsystem": "bdev", 00:04:49.882 "config": [ 00:04:49.882 { 00:04:49.882 "method": "bdev_set_options", 00:04:49.882 "params": { 00:04:49.882 "bdev_io_pool_size": 65535, 00:04:49.882 "bdev_io_cache_size": 256, 00:04:49.882 "bdev_auto_examine": true, 00:04:49.882 "iobuf_small_cache_size": 128, 00:04:49.882 "iobuf_large_cache_size": 16 00:04:49.882 } 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "method": "bdev_raid_set_options", 00:04:49.882 "params": { 00:04:49.882 "process_window_size_kb": 1024, 00:04:49.882 "process_max_bandwidth_mb_sec": 0 00:04:49.882 } 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "method": "bdev_iscsi_set_options", 00:04:49.882 "params": { 00:04:49.882 "timeout_sec": 30 00:04:49.882 } 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "method": "bdev_nvme_set_options", 00:04:49.882 "params": { 00:04:49.882 "action_on_timeout": "none", 00:04:49.882 "timeout_us": 0, 00:04:49.882 "timeout_admin_us": 0, 00:04:49.882 "keep_alive_timeout_ms": 10000, 
00:04:49.882 "arbitration_burst": 0, 00:04:49.882 "low_priority_weight": 0, 00:04:49.882 "medium_priority_weight": 0, 00:04:49.882 "high_priority_weight": 0, 00:04:49.882 "nvme_adminq_poll_period_us": 10000, 00:04:49.882 "nvme_ioq_poll_period_us": 0, 00:04:49.882 "io_queue_requests": 0, 00:04:49.882 "delay_cmd_submit": true, 00:04:49.882 "transport_retry_count": 4, 00:04:49.882 "bdev_retry_count": 3, 00:04:49.882 "transport_ack_timeout": 0, 00:04:49.882 "ctrlr_loss_timeout_sec": 0, 00:04:49.882 "reconnect_delay_sec": 0, 00:04:49.882 "fast_io_fail_timeout_sec": 0, 00:04:49.882 "disable_auto_failback": false, 00:04:49.882 "generate_uuids": false, 00:04:49.882 "transport_tos": 0, 00:04:49.882 "nvme_error_stat": false, 00:04:49.882 "rdma_srq_size": 0, 00:04:49.882 "io_path_stat": false, 00:04:49.882 "allow_accel_sequence": false, 00:04:49.882 "rdma_max_cq_size": 0, 00:04:49.882 "rdma_cm_event_timeout_ms": 0, 00:04:49.882 "dhchap_digests": [ 00:04:49.882 "sha256", 00:04:49.882 "sha384", 00:04:49.882 "sha512" 00:04:49.882 ], 00:04:49.882 "dhchap_dhgroups": [ 00:04:49.882 "null", 00:04:49.882 "ffdhe2048", 00:04:49.882 "ffdhe3072", 00:04:49.882 "ffdhe4096", 00:04:49.882 "ffdhe6144", 00:04:49.882 "ffdhe8192" 00:04:49.882 ], 00:04:49.882 "rdma_umr_per_io": false 00:04:49.882 } 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "method": "bdev_nvme_set_hotplug", 00:04:49.882 "params": { 00:04:49.882 "period_us": 100000, 00:04:49.882 "enable": false 00:04:49.882 } 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "method": "bdev_wait_for_examine" 00:04:49.882 } 00:04:49.882 ] 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "subsystem": "scsi", 00:04:49.882 "config": null 00:04:49.882 }, 00:04:49.882 { 00:04:49.882 "subsystem": "scheduler", 00:04:49.882 "config": [ 00:04:49.882 { 00:04:49.882 "method": "framework_set_scheduler", 00:04:49.882 "params": { 00:04:49.882 "name": "static" 00:04:49.882 } 00:04:49.882 } 00:04:49.882 ] 00:04:49.882 }, 00:04:49.882 { 00:04:49.883 "subsystem": "vhost_scsi", 00:04:49.883 "config": [] 00:04:49.883 }, 00:04:49.883 { 00:04:49.883 "subsystem": "vhost_blk", 00:04:49.883 "config": [] 00:04:49.883 }, 00:04:49.883 { 00:04:49.883 "subsystem": "ublk", 00:04:49.883 "config": [] 00:04:49.883 }, 00:04:49.883 { 00:04:49.883 "subsystem": "nbd", 00:04:49.883 "config": [] 00:04:49.883 }, 00:04:49.883 { 00:04:49.883 "subsystem": "nvmf", 00:04:49.883 "config": [ 00:04:49.883 { 00:04:49.883 "method": "nvmf_set_config", 00:04:49.883 "params": { 00:04:49.883 "discovery_filter": "match_any", 00:04:49.883 "admin_cmd_passthru": { 00:04:49.883 "identify_ctrlr": false 00:04:49.883 }, 00:04:49.883 "dhchap_digests": [ 00:04:49.883 "sha256", 00:04:49.883 "sha384", 00:04:49.883 "sha512" 00:04:49.883 ], 00:04:49.883 "dhchap_dhgroups": [ 00:04:49.883 "null", 00:04:49.883 "ffdhe2048", 00:04:49.883 "ffdhe3072", 00:04:49.883 "ffdhe4096", 00:04:49.883 "ffdhe6144", 00:04:49.883 "ffdhe8192" 00:04:49.883 ] 00:04:49.883 } 00:04:49.883 }, 00:04:49.883 { 00:04:49.883 "method": "nvmf_set_max_subsystems", 00:04:49.883 "params": { 00:04:49.883 "max_subsystems": 1024 00:04:49.883 } 00:04:49.883 }, 00:04:49.883 { 00:04:49.883 "method": "nvmf_set_crdt", 00:04:49.883 "params": { 00:04:49.883 "crdt1": 0, 00:04:49.883 "crdt2": 0, 00:04:49.883 "crdt3": 0 00:04:49.883 } 00:04:49.883 }, 00:04:49.883 { 00:04:49.883 "method": "nvmf_create_transport", 00:04:49.883 "params": { 00:04:49.883 "trtype": "TCP", 00:04:49.883 "max_queue_depth": 128, 00:04:49.883 "max_io_qpairs_per_ctrlr": 127, 00:04:49.883 "in_capsule_data_size": 4096, 
00:04:49.883 "max_io_size": 131072, 00:04:49.883 "io_unit_size": 131072, 00:04:49.883 "max_aq_depth": 128, 00:04:49.883 "num_shared_buffers": 511, 00:04:49.883 "buf_cache_size": 4294967295, 00:04:49.883 "dif_insert_or_strip": false, 00:04:49.883 "zcopy": false, 00:04:49.883 "c2h_success": true, 00:04:49.883 "sock_priority": 0, 00:04:49.883 "abort_timeout_sec": 1, 00:04:49.883 "ack_timeout": 0, 00:04:49.883 "data_wr_pool_size": 0 00:04:49.883 } 00:04:49.883 } 00:04:49.883 ] 00:04:49.883 }, 00:04:49.883 { 00:04:49.883 "subsystem": "iscsi", 00:04:49.883 "config": [ 00:04:49.883 { 00:04:49.883 "method": "iscsi_set_options", 00:04:49.883 "params": { 00:04:49.883 "node_base": "iqn.2016-06.io.spdk", 00:04:49.883 "max_sessions": 128, 00:04:49.883 "max_connections_per_session": 2, 00:04:49.883 "max_queue_depth": 64, 00:04:49.883 "default_time2wait": 2, 00:04:49.883 "default_time2retain": 20, 00:04:49.883 "first_burst_length": 8192, 00:04:49.883 "immediate_data": true, 00:04:49.883 "allow_duplicated_isid": false, 00:04:49.883 "error_recovery_level": 0, 00:04:49.883 "nop_timeout": 60, 00:04:49.883 "nop_in_interval": 30, 00:04:49.883 "disable_chap": false, 00:04:49.883 "require_chap": false, 00:04:49.883 "mutual_chap": false, 00:04:49.883 "chap_group": 0, 00:04:49.883 "max_large_datain_per_connection": 64, 00:04:49.883 "max_r2t_per_connection": 4, 00:04:49.883 "pdu_pool_size": 36864, 00:04:49.883 "immediate_data_pool_size": 16384, 00:04:49.883 "data_out_pool_size": 2048 00:04:49.883 } 00:04:49.883 } 00:04:49.883 ] 00:04:49.883 } 00:04:49.883 ] 00:04:49.883 } 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59620 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59620 ']' 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59620 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59620 00:04:49.883 killing process with pid 59620 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59620' 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59620 00:04:49.883 05:22:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59620 00:04:51.785 05:22:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59665 00:04:51.785 05:22:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.785 05:22:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59665 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59665 ']' 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # kill -0 59665 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59665 00:04:57.054 killing process with pid 59665 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59665' 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59665 00:04:57.054 05:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59665 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:58.432 00:04:58.432 real 0m9.899s 00:04:58.432 user 0m9.547s 00:04:58.432 sys 0m0.758s 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.432 ************************************ 00:04:58.432 END TEST skip_rpc_with_json 00:04:58.432 ************************************ 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.432 05:22:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:58.432 05:22:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.432 05:22:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.432 05:22:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.432 ************************************ 00:04:58.432 START TEST skip_rpc_with_delay 00:04:58.432 ************************************ 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.432 
05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:58.432 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.692 [2024-12-16 05:22:38.730958] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:58.692 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:58.692 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.692 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:58.692 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.692 00:04:58.692 real 0m0.162s 00:04:58.692 user 0m0.096s 00:04:58.692 sys 0m0.064s 00:04:58.692 ************************************ 00:04:58.692 END TEST skip_rpc_with_delay 00:04:58.692 ************************************ 00:04:58.692 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.692 05:22:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:58.692 05:22:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:58.692 05:22:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:58.692 05:22:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:58.692 05:22:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.692 05:22:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.692 05:22:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.692 ************************************ 00:04:58.692 START TEST exit_on_failed_rpc_init 00:04:58.692 ************************************ 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59793 00:04:58.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59793 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59793 ']' 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.692 05:22:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.951 [2024-12-16 05:22:38.976632] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:58.951 [2024-12-16 05:22:38.977000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59793 ] 00:04:58.951 [2024-12-16 05:22:39.154152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.211 [2024-12-16 05:22:39.237126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.211 [2024-12-16 05:22:39.422420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:59.779 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.779 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:59.779 05:22:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.779 05:22:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.779 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:59.779 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.779 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.779 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.780 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.780 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.780 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.780 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.780 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.780 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:59.780 05:22:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.039 [2024-12-16 05:22:40.072689] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:00.039 [2024-12-16 05:22:40.073109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59811 ] 00:05:00.039 [2024-12-16 05:22:40.257491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.298 [2024-12-16 05:22:40.368757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.298 [2024-12-16 05:22:40.368880] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:00.298 [2024-12-16 05:22:40.368900] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:00.298 [2024-12-16 05:22:40.368917] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59793 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59793 ']' 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59793 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59793 00:05:00.557 killing process with pid 59793 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59793' 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59793 00:05:00.557 05:22:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59793 00:05:02.464 00:05:02.464 real 0m3.535s 00:05:02.464 user 0m4.064s 00:05:02.464 sys 0m0.510s 00:05:02.464 05:22:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.464 ************************************ 00:05:02.464 END TEST exit_on_failed_rpc_init 00:05:02.464 ************************************ 00:05:02.464 05:22:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.464 05:22:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.464 00:05:02.464 real 0m20.811s 00:05:02.464 user 0m20.284s 00:05:02.464 sys 0m1.859s 00:05:02.464 05:22:42 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.464 05:22:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.464 ************************************ 00:05:02.464 END TEST skip_rpc 00:05:02.464 ************************************ 00:05:02.464 05:22:42 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.464 05:22:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.464 05:22:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.464 05:22:42 -- common/autotest_common.sh@10 -- # set +x 00:05:02.464 
************************************ 00:05:02.464 START TEST rpc_client 00:05:02.464 ************************************ 00:05:02.464 05:22:42 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.464 * Looking for test storage... 00:05:02.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:02.464 05:22:42 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.464 05:22:42 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.464 05:22:42 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.464 05:22:42 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.464 05:22:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:02.465 05:22:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.465 05:22:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.465 05:22:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.465 05:22:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:02.465 05:22:42 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.465 05:22:42 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.465 --rc genhtml_branch_coverage=1 00:05:02.465 --rc genhtml_function_coverage=1 00:05:02.465 --rc genhtml_legend=1 00:05:02.465 --rc geninfo_all_blocks=1 00:05:02.465 --rc geninfo_unexecuted_blocks=1 00:05:02.465 00:05:02.465 ' 00:05:02.465 05:22:42 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.465 --rc genhtml_branch_coverage=1 00:05:02.465 --rc genhtml_function_coverage=1 00:05:02.465 --rc genhtml_legend=1 00:05:02.465 --rc geninfo_all_blocks=1 00:05:02.465 --rc geninfo_unexecuted_blocks=1 00:05:02.465 00:05:02.465 ' 00:05:02.465 05:22:42 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.465 --rc genhtml_branch_coverage=1 00:05:02.465 --rc genhtml_function_coverage=1 00:05:02.465 --rc genhtml_legend=1 00:05:02.465 --rc geninfo_all_blocks=1 00:05:02.465 --rc geninfo_unexecuted_blocks=1 00:05:02.465 00:05:02.465 ' 00:05:02.465 05:22:42 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.465 --rc genhtml_branch_coverage=1 00:05:02.465 --rc genhtml_function_coverage=1 00:05:02.465 --rc genhtml_legend=1 00:05:02.465 --rc geninfo_all_blocks=1 00:05:02.465 --rc geninfo_unexecuted_blocks=1 00:05:02.465 00:05:02.465 ' 00:05:02.465 05:22:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:02.465 OK 00:05:02.725 05:22:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:02.725 00:05:02.725 real 0m0.248s 00:05:02.725 user 0m0.145s 00:05:02.725 sys 0m0.115s 00:05:02.725 05:22:42 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.725 05:22:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:02.725 ************************************ 00:05:02.725 END TEST rpc_client 00:05:02.725 ************************************ 00:05:02.725 05:22:42 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.725 05:22:42 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.725 05:22:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.725 05:22:42 -- common/autotest_common.sh@10 -- # set +x 00:05:02.725 ************************************ 00:05:02.725 START TEST json_config 00:05:02.725 ************************************ 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.725 05:22:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.725 05:22:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.725 05:22:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.725 05:22:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.725 05:22:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.725 05:22:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.725 05:22:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.725 05:22:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.725 05:22:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.725 05:22:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.725 05:22:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.725 05:22:42 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:02.725 05:22:42 json_config -- scripts/common.sh@345 -- # : 1 00:05:02.725 05:22:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.725 05:22:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.725 05:22:42 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:02.725 05:22:42 json_config -- scripts/common.sh@353 -- # local d=1 00:05:02.725 05:22:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.725 05:22:42 json_config -- scripts/common.sh@355 -- # echo 1 00:05:02.725 05:22:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.725 05:22:42 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:02.725 05:22:42 json_config -- scripts/common.sh@353 -- # local d=2 00:05:02.725 05:22:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.725 05:22:42 json_config -- scripts/common.sh@355 -- # echo 2 00:05:02.725 05:22:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.725 05:22:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.725 05:22:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.725 05:22:42 json_config -- scripts/common.sh@368 -- # return 0 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.725 --rc genhtml_branch_coverage=1 00:05:02.725 --rc genhtml_function_coverage=1 00:05:02.725 --rc genhtml_legend=1 00:05:02.725 --rc geninfo_all_blocks=1 00:05:02.725 --rc geninfo_unexecuted_blocks=1 00:05:02.725 00:05:02.725 ' 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.725 --rc genhtml_branch_coverage=1 00:05:02.725 --rc genhtml_function_coverage=1 00:05:02.725 --rc genhtml_legend=1 00:05:02.725 --rc geninfo_all_blocks=1 00:05:02.725 --rc geninfo_unexecuted_blocks=1 00:05:02.725 00:05:02.725 ' 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.725 --rc genhtml_branch_coverage=1 00:05:02.725 --rc genhtml_function_coverage=1 00:05:02.725 --rc genhtml_legend=1 00:05:02.725 --rc geninfo_all_blocks=1 00:05:02.725 --rc geninfo_unexecuted_blocks=1 00:05:02.725 00:05:02.725 ' 00:05:02.725 05:22:42 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.725 --rc genhtml_branch_coverage=1 00:05:02.725 --rc genhtml_function_coverage=1 00:05:02.725 --rc genhtml_legend=1 00:05:02.725 --rc geninfo_all_blocks=1 00:05:02.725 --rc geninfo_unexecuted_blocks=1 00:05:02.725 00:05:02.725 ' 00:05:02.725 05:22:42 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.725 05:22:42 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.725 05:22:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.725 05:22:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.725 05:22:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.725 05:22:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.725 05:22:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.725 05:22:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.725 05:22:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.725 05:22:42 json_config -- paths/export.sh@5 -- # export PATH 00:05:02.725 05:22:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@51 -- # : 0 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.725 05:22:42 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.725 05:22:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.726 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.726 05:22:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.726 05:22:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.726 05:22:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:02.726 INFO: JSON configuration test init 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:02.726 05:22:42 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:02.726 05:22:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.726 05:22:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.985 05:22:42 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:02.985 05:22:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.985 05:22:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.985 05:22:42 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:02.985 05:22:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:02.985 05:22:42 json_config -- json_config/common.sh@10 -- # shift 
00:05:02.985 05:22:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.985 05:22:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.985 05:22:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.985 05:22:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.985 05:22:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.985 05:22:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59970 00:05:02.985 05:22:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.985 Waiting for target to run... 00:05:02.985 05:22:42 json_config -- json_config/common.sh@25 -- # waitforlisten 59970 /var/tmp/spdk_tgt.sock 00:05:02.985 05:22:42 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:02.985 05:22:42 json_config -- common/autotest_common.sh@835 -- # '[' -z 59970 ']' 00:05:02.985 05:22:42 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.985 05:22:42 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.985 05:22:42 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.985 05:22:42 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.985 05:22:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.985 [2024-12-16 05:22:43.119906] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:02.985 [2024-12-16 05:22:43.120317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59970 ] 00:05:03.244 [2024-12-16 05:22:43.469738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.503 [2024-12-16 05:22:43.546774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.071 00:05:04.071 05:22:44 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.071 05:22:44 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:04.071 05:22:44 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.071 05:22:44 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:04.071 05:22:44 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:04.071 05:22:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.071 05:22:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.071 05:22:44 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:04.071 05:22:44 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:04.071 05:22:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.071 05:22:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.071 05:22:44 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:04.071 05:22:44 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:04.072 05:22:44 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:04.662 [2024-12-16 05:22:44.593056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.934 05:22:45 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:04.934 05:22:45 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:04.934 05:22:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.934 05:22:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.934 05:22:45 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:04.934 05:22:45 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:04.934 05:22:45 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:04.934 05:22:45 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:04.934 05:22:45 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:04.934 05:22:45 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:04.934 05:22:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:04.934 05:22:45 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@54 -- # sort 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:05.192 05:22:45 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:05.192 05:22:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.192 05:22:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.450 05:22:45 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:05.450 05:22:45 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:05.450 05:22:45 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:05.450 05:22:45 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:05.450 05:22:45 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:05.450 05:22:45 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:05.450 05:22:45 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:05.450 05:22:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.451 05:22:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.451 05:22:45 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:05.451 05:22:45 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:05.451 05:22:45 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:05.451 05:22:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.451 05:22:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:05.709 MallocForNvmf0 00:05:05.709 05:22:45 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.709 05:22:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:05.969 MallocForNvmf1 00:05:05.969 05:22:46 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:05.969 05:22:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.227 [2024-12-16 05:22:46.277872] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.227 05:22:46 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.227 05:22:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.486 05:22:46 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.486 05:22:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:06.486 05:22:46 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.486 05:22:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:06.745 05:22:46 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:06.745 05:22:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:07.004 [2024-12-16 05:22:47.162631] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.004 05:22:47 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:07.004 05:22:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.004 05:22:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.004 05:22:47 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:07.004 05:22:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.004 05:22:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.004 05:22:47 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:07.004 05:22:47 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.004 05:22:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.572 MallocBdevForConfigChangeCheck 00:05:07.572 05:22:47 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:07.572 05:22:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.572 05:22:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.572 05:22:47 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:07.572 05:22:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.831 05:22:48 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:07.831 INFO: shutting down applications... 00:05:07.831 05:22:48 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:07.831 05:22:48 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:07.831 05:22:48 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:07.831 05:22:48 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:08.090 Calling clear_iscsi_subsystem 00:05:08.090 Calling clear_nvmf_subsystem 00:05:08.090 Calling clear_nbd_subsystem 00:05:08.090 Calling clear_ublk_subsystem 00:05:08.090 Calling clear_vhost_blk_subsystem 00:05:08.090 Calling clear_vhost_scsi_subsystem 00:05:08.090 Calling clear_bdev_subsystem 00:05:08.349 05:22:48 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:08.349 05:22:48 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:08.349 05:22:48 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:08.349 05:22:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.349 05:22:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:08.349 05:22:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:08.609 05:22:48 json_config -- json_config/json_config.sh@352 -- # break 00:05:08.609 05:22:48 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:08.609 05:22:48 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:08.609 05:22:48 json_config -- json_config/common.sh@31 -- # local app=target 00:05:08.609 05:22:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.609 05:22:48 json_config -- json_config/common.sh@35 -- # [[ -n 59970 ]] 00:05:08.609 05:22:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59970 00:05:08.609 05:22:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.609 05:22:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.609 05:22:48 json_config -- json_config/common.sh@41 -- # kill -0 59970 00:05:08.609 05:22:48 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:09.177 05:22:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.177 05:22:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.177 05:22:49 json_config -- json_config/common.sh@41 -- # kill -0 59970 00:05:09.177 05:22:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.745 05:22:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.745 05:22:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.745 05:22:49 json_config -- json_config/common.sh@41 -- # kill -0 59970 00:05:09.745 05:22:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.745 05:22:49 json_config -- json_config/common.sh@43 -- # break 00:05:09.745 SPDK target shutdown done 00:05:09.745 INFO: relaunching applications... 00:05:09.745 05:22:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.745 05:22:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.745 05:22:49 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:09.745 05:22:49 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.745 05:22:49 json_config -- json_config/common.sh@9 -- # local app=target 00:05:09.745 05:22:49 json_config -- json_config/common.sh@10 -- # shift 00:05:09.745 05:22:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.745 05:22:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.745 05:22:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.745 05:22:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.745 05:22:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.745 05:22:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60173 00:05:09.745 05:22:49 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.745 Waiting for target to run... 00:05:09.745 05:22:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.745 05:22:49 json_config -- json_config/common.sh@25 -- # waitforlisten 60173 /var/tmp/spdk_tgt.sock 00:05:09.745 05:22:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 60173 ']' 00:05:09.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.745 05:22:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.745 05:22:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.745 05:22:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.745 05:22:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.745 05:22:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.745 [2024-12-16 05:22:49.878033] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:09.745 [2024-12-16 05:22:49.878162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60173 ] 00:05:10.004 [2024-12-16 05:22:50.196136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.263 [2024-12-16 05:22:50.272504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.522 [2024-12-16 05:22:50.562102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:11.089 [2024-12-16 05:22:51.113897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.089 [2024-12-16 05:22:51.146107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.089 00:05:11.089 INFO: Checking if target configuration is the same... 00:05:11.089 05:22:51 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.089 05:22:51 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:11.089 05:22:51 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.089 05:22:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:11.089 05:22:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:11.089 05:22:51 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.089 05:22:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:11.089 05:22:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.089 + '[' 2 -ne 2 ']' 00:05:11.089 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:11.089 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:11.089 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:11.090 +++ basename /dev/fd/62 00:05:11.090 ++ mktemp /tmp/62.XXX 00:05:11.090 + tmp_file_1=/tmp/62.NBz 00:05:11.090 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.090 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.090 + tmp_file_2=/tmp/spdk_tgt_config.json.rZF 00:05:11.090 + ret=0 00:05:11.090 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.658 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.658 + diff -u /tmp/62.NBz /tmp/spdk_tgt_config.json.rZF 00:05:11.658 INFO: JSON config files are the same 00:05:11.658 + echo 'INFO: JSON config files are the same' 00:05:11.658 + rm /tmp/62.NBz /tmp/spdk_tgt_config.json.rZF 00:05:11.658 + exit 0 00:05:11.658 INFO: changing configuration and checking if this can be detected... 00:05:11.658 05:22:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:11.658 05:22:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
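The NVMe-oF target configuration exercised in this run is built entirely through rpc.py against the target's UNIX-domain RPC socket. A minimal sketch of that sequence, using the same socket path, bdev names, NQN, and sizes that appear in the log above (adjust the paths for a local checkout):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    # two malloc bdevs that back the NVMe-oF namespaces (size in MB, block size in bytes)
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, then a subsystem carrying both namespaces and a loopback listener on 4420
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420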
00:05:11.658 05:22:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.658 05:22:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.917 05:22:51 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:11.917 05:22:51 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.917 05:22:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.917 + '[' 2 -ne 2 ']' 00:05:11.917 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:11.917 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:11.917 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:11.917 +++ basename /dev/fd/62 00:05:11.917 ++ mktemp /tmp/62.XXX 00:05:11.917 + tmp_file_1=/tmp/62.moU 00:05:11.917 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.917 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.917 + tmp_file_2=/tmp/spdk_tgt_config.json.8YN 00:05:11.917 + ret=0 00:05:11.917 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:12.175 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:12.434 + diff -u /tmp/62.moU /tmp/spdk_tgt_config.json.8YN 00:05:12.434 + ret=1 00:05:12.434 + echo '=== Start of file: /tmp/62.moU ===' 00:05:12.434 + cat /tmp/62.moU 00:05:12.434 + echo '=== End of file: /tmp/62.moU ===' 00:05:12.434 + echo '' 00:05:12.434 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8YN ===' 00:05:12.434 + cat /tmp/spdk_tgt_config.json.8YN 00:05:12.435 + echo '=== End of file: /tmp/spdk_tgt_config.json.8YN ===' 00:05:12.435 + echo '' 00:05:12.435 + rm /tmp/62.moU /tmp/spdk_tgt_config.json.8YN 00:05:12.435 + exit 1 00:05:12.435 INFO: configuration change detected. 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
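Both configuration checks above rely on the same mechanism: the running target's configuration is dumped with save_config, the dump and the on-disk spdk_tgt_config.json are both normalized by config_filter.py -method sort, and the results are compared with diff -u. The "same" check expects an empty diff; deleting MallocBdevForConfigChangeCheck before re-running the comparison is what makes the "change detected" case produce a non-empty diff. A rough sketch of one comparison pass, with the mktemp handling of the real json_diff.sh simplified to fixed temp paths:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    # dump the live config and normalize both sides before diffing
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk.json
    diff -u /tmp/disk.json /tmp/live.json   # empty output means the configs match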
00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 60173 ]] 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.435 05:22:52 json_config -- json_config/json_config.sh@330 -- # killprocess 60173 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@954 -- # '[' -z 60173 ']' 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@958 -- # kill -0 60173 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@959 -- # uname 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60173 00:05:12.435 killing process with pid 60173 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60173' 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@973 -- # kill 60173 00:05:12.435 05:22:52 json_config -- common/autotest_common.sh@978 -- # wait 60173 00:05:13.372 05:22:53 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:13.372 05:22:53 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:13.372 05:22:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.372 05:22:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.372 INFO: Success 00:05:13.372 05:22:53 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:13.372 05:22:53 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:13.372 ************************************ 00:05:13.372 END TEST json_config 00:05:13.372 
************************************ 00:05:13.372 00:05:13.372 real 0m10.560s 00:05:13.372 user 0m14.191s 00:05:13.372 sys 0m1.741s 00:05:13.372 05:22:53 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.372 05:22:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.372 05:22:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:13.372 05:22:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.372 05:22:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.372 05:22:53 -- common/autotest_common.sh@10 -- # set +x 00:05:13.372 ************************************ 00:05:13.372 START TEST json_config_extra_key 00:05:13.372 ************************************ 00:05:13.372 05:22:53 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:13.372 05:22:53 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.372 05:22:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.372 05:22:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.372 05:22:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.372 05:22:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:13.373 05:22:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.373 05:22:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.373 05:22:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.373 05:22:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.373 --rc genhtml_branch_coverage=1 00:05:13.373 --rc genhtml_function_coverage=1 00:05:13.373 --rc genhtml_legend=1 00:05:13.373 --rc geninfo_all_blocks=1 00:05:13.373 --rc geninfo_unexecuted_blocks=1 00:05:13.373 00:05:13.373 ' 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.373 --rc genhtml_branch_coverage=1 00:05:13.373 --rc genhtml_function_coverage=1 00:05:13.373 --rc genhtml_legend=1 00:05:13.373 --rc geninfo_all_blocks=1 00:05:13.373 --rc geninfo_unexecuted_blocks=1 00:05:13.373 00:05:13.373 ' 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.373 --rc genhtml_branch_coverage=1 00:05:13.373 --rc genhtml_function_coverage=1 00:05:13.373 --rc genhtml_legend=1 00:05:13.373 --rc geninfo_all_blocks=1 00:05:13.373 --rc geninfo_unexecuted_blocks=1 00:05:13.373 00:05:13.373 ' 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.373 --rc genhtml_branch_coverage=1 00:05:13.373 --rc genhtml_function_coverage=1 00:05:13.373 --rc genhtml_legend=1 00:05:13.373 --rc geninfo_all_blocks=1 00:05:13.373 --rc geninfo_unexecuted_blocks=1 00:05:13.373 00:05:13.373 ' 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.373 05:22:53 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:13.373 05:22:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.373 05:22:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.373 05:22:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.373 05:22:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.373 05:22:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.373 05:22:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.373 05:22:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.373 05:22:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.373 05:22:53 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.373 05:22:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.373 INFO: launching applications... 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
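The extra_key variant drives the same helpers from json_config/common.sh; the per-app settings declared above are plain bash associative arrays keyed by the app name ("target" here), which is how one script can manage several SPDK apps with different sockets, parameters, and JSON configs. A condensed sketch of that structure, with the values copied from this run:

    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
    # app_pid[target] is filled in once spdk_tgt is launched and cleared again on shutdown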
00:05:13.373 05:22:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60339 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.373 Waiting for target to run... 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:13.373 05:22:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60339 /var/tmp/spdk_tgt.sock 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 60339 ']' 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.373 05:22:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.632 [2024-12-16 05:22:53.720017] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:13.632 [2024-12-16 05:22:53.720447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60339 ] 00:05:13.891 [2024-12-16 05:22:54.053966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.891 [2024-12-16 05:22:54.129722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.148 [2024-12-16 05:22:54.311669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.715 05:22:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.715 00:05:14.715 INFO: shutting down applications... 00:05:14.715 05:22:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:14.715 05:22:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:14.715 05:22:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
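json_config_test_start_app, as logged above, boils down to launching spdk_tgt with the JSON config applied at startup and then blocking until the RPC socket accepts requests. A minimal sketch of that launch step; waitforlisten is the repo helper that does the polling, and the loop below (plus the tgt_pid variable name) is only a simplified stand-in for it:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!
    # do not issue test RPCs until the target answers on its socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done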
00:05:14.715 05:22:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:14.715 05:22:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:14.715 05:22:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.715 05:22:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60339 ]] 00:05:14.715 05:22:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60339 00:05:14.715 05:22:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.715 05:22:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.716 05:22:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60339 00:05:14.716 05:22:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.975 05:22:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.975 05:22:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.975 05:22:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60339 00:05:14.975 05:22:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.542 05:22:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.542 05:22:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.542 05:22:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60339 00:05:15.542 05:22:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.110 05:22:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.110 05:22:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.110 05:22:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60339 00:05:16.110 05:22:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.678 05:22:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.678 05:22:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.678 05:22:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60339 00:05:16.678 05:22:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:16.678 05:22:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:16.678 05:22:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:16.678 SPDK target shutdown done 00:05:16.678 05:22:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:16.678 Success 00:05:16.678 05:22:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:16.678 ************************************ 00:05:16.678 END TEST json_config_extra_key 00:05:16.678 ************************************ 00:05:16.678 00:05:16.678 real 0m3.326s 00:05:16.678 user 0m3.220s 00:05:16.678 sys 0m0.480s 00:05:16.678 05:22:56 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.678 05:22:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.678 05:22:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.678 05:22:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.678 05:22:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.678 05:22:56 -- common/autotest_common.sh@10 -- # set +x 00:05:16.678 
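The shutdown helper seen here (and earlier in the json_config run) sends SIGINT to the recorded PID and then polls with kill -0 in half-second steps, giving the target up to 30 checks (about 15 seconds) to exit cleanly. A stripped-down sketch of that loop; the real helper reads the PID from app_pid["$app"], which is abbreviated to a plain variable here:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only tests for existence; it fails once the process has exited
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done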
************************************ 00:05:16.678 START TEST alias_rpc 00:05:16.678 ************************************ 00:05:16.678 05:22:56 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.678 * Looking for test storage... 00:05:16.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:16.678 05:22:56 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:16.678 05:22:56 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:16.679 05:22:56 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.938 05:22:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:16.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.938 --rc genhtml_branch_coverage=1 00:05:16.938 --rc genhtml_function_coverage=1 00:05:16.938 --rc genhtml_legend=1 00:05:16.938 --rc geninfo_all_blocks=1 00:05:16.938 --rc geninfo_unexecuted_blocks=1 00:05:16.938 00:05:16.938 ' 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:16.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.938 --rc genhtml_branch_coverage=1 00:05:16.938 --rc genhtml_function_coverage=1 00:05:16.938 --rc genhtml_legend=1 00:05:16.938 --rc geninfo_all_blocks=1 00:05:16.938 --rc geninfo_unexecuted_blocks=1 00:05:16.938 00:05:16.938 ' 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:16.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.938 --rc genhtml_branch_coverage=1 00:05:16.938 --rc genhtml_function_coverage=1 00:05:16.938 --rc genhtml_legend=1 00:05:16.938 --rc geninfo_all_blocks=1 00:05:16.938 --rc geninfo_unexecuted_blocks=1 00:05:16.938 00:05:16.938 ' 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:16.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.938 --rc genhtml_branch_coverage=1 00:05:16.938 --rc genhtml_function_coverage=1 00:05:16.938 --rc genhtml_legend=1 00:05:16.938 --rc geninfo_all_blocks=1 00:05:16.938 --rc geninfo_unexecuted_blocks=1 00:05:16.938 00:05:16.938 ' 00:05:16.938 05:22:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.938 05:22:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60432 00:05:16.938 05:22:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.938 05:22:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60432 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 60432 ']' 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:16.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.938 05:22:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.938 [2024-12-16 05:22:57.088235] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:16.938 [2024-12-16 05:22:57.089544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60432 ] 00:05:17.197 [2024-12-16 05:22:57.278417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.197 [2024-12-16 05:22:57.404255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.456 [2024-12-16 05:22:57.606073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.024 05:22:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.024 05:22:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.024 05:22:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:18.283 05:22:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60432 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 60432 ']' 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 60432 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60432 00:05:18.283 killing process with pid 60432 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60432' 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@973 -- # kill 60432 00:05:18.283 05:22:58 alias_rpc -- common/autotest_common.sh@978 -- # wait 60432 00:05:20.188 ************************************ 00:05:20.188 END TEST alias_rpc 00:05:20.188 ************************************ 00:05:20.188 00:05:20.188 real 0m3.317s 00:05:20.188 user 0m3.556s 00:05:20.188 sys 0m0.488s 00:05:20.188 05:23:00 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.188 05:23:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.188 05:23:00 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:20.188 05:23:00 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:20.188 05:23:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.188 05:23:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.188 05:23:00 -- common/autotest_common.sh@10 -- # set +x 00:05:20.188 ************************************ 00:05:20.188 START TEST spdkcli_tcp 00:05:20.188 ************************************ 00:05:20.188 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:20.188 * Looking for test storage... 
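killprocess, used to tear down both this alias_rpc target and the earlier json_config one, is defensive about what it signals: it checks that the PID is non-empty and still alive, inspects the process name with ps --no-headers -o comm= so it can special-case a sudo parent, and only then kills and waits on the PID. A condensed sketch of that pattern; the sudo branch of the real helper is more involved and is simply skipped here:

    killprocess() {
        local pid=$1 name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                  # real helper handles sudo specially; simplified here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }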
00:05:20.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:20.188 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:20.188 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:20.188 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:20.188 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:20.188 05:23:00 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.188 05:23:00 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.188 05:23:00 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.188 05:23:00 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.188 05:23:00 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.189 05:23:00 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:20.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.189 --rc genhtml_branch_coverage=1 00:05:20.189 --rc genhtml_function_coverage=1 00:05:20.189 --rc genhtml_legend=1 00:05:20.189 --rc geninfo_all_blocks=1 00:05:20.189 --rc geninfo_unexecuted_blocks=1 00:05:20.189 00:05:20.189 ' 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:20.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.189 --rc genhtml_branch_coverage=1 00:05:20.189 --rc genhtml_function_coverage=1 00:05:20.189 --rc genhtml_legend=1 00:05:20.189 --rc geninfo_all_blocks=1 00:05:20.189 --rc geninfo_unexecuted_blocks=1 00:05:20.189 
00:05:20.189 ' 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:20.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.189 --rc genhtml_branch_coverage=1 00:05:20.189 --rc genhtml_function_coverage=1 00:05:20.189 --rc genhtml_legend=1 00:05:20.189 --rc geninfo_all_blocks=1 00:05:20.189 --rc geninfo_unexecuted_blocks=1 00:05:20.189 00:05:20.189 ' 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:20.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.189 --rc genhtml_branch_coverage=1 00:05:20.189 --rc genhtml_function_coverage=1 00:05:20.189 --rc genhtml_legend=1 00:05:20.189 --rc geninfo_all_blocks=1 00:05:20.189 --rc geninfo_unexecuted_blocks=1 00:05:20.189 00:05:20.189 ' 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60539 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60539 00:05:20.189 05:23:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 60539 ']' 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.189 05:23:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.448 [2024-12-16 05:23:00.461944] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:20.448 [2024-12-16 05:23:00.462388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60539 ] 00:05:20.448 [2024-12-16 05:23:00.642208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.707 [2024-12-16 05:23:00.725929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.707 [2024-12-16 05:23:00.725941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.707 [2024-12-16 05:23:00.923564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.275 05:23:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.275 05:23:01 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:21.275 05:23:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:21.275 05:23:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60556 00:05:21.275 05:23:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:21.535 [ 00:05:21.535 "bdev_malloc_delete", 00:05:21.535 "bdev_malloc_create", 00:05:21.535 "bdev_null_resize", 00:05:21.535 "bdev_null_delete", 00:05:21.535 "bdev_null_create", 00:05:21.535 "bdev_nvme_cuse_unregister", 00:05:21.535 "bdev_nvme_cuse_register", 00:05:21.535 "bdev_opal_new_user", 00:05:21.535 "bdev_opal_set_lock_state", 00:05:21.535 "bdev_opal_delete", 00:05:21.535 "bdev_opal_get_info", 00:05:21.535 "bdev_opal_create", 00:05:21.535 "bdev_nvme_opal_revert", 00:05:21.535 "bdev_nvme_opal_init", 00:05:21.535 "bdev_nvme_send_cmd", 00:05:21.535 "bdev_nvme_set_keys", 00:05:21.535 "bdev_nvme_get_path_iostat", 00:05:21.535 "bdev_nvme_get_mdns_discovery_info", 00:05:21.535 "bdev_nvme_stop_mdns_discovery", 00:05:21.535 "bdev_nvme_start_mdns_discovery", 00:05:21.535 "bdev_nvme_set_multipath_policy", 00:05:21.535 "bdev_nvme_set_preferred_path", 00:05:21.535 "bdev_nvme_get_io_paths", 00:05:21.535 "bdev_nvme_remove_error_injection", 00:05:21.535 "bdev_nvme_add_error_injection", 00:05:21.535 "bdev_nvme_get_discovery_info", 00:05:21.535 "bdev_nvme_stop_discovery", 00:05:21.535 "bdev_nvme_start_discovery", 00:05:21.535 "bdev_nvme_get_controller_health_info", 00:05:21.535 "bdev_nvme_disable_controller", 00:05:21.535 "bdev_nvme_enable_controller", 00:05:21.535 "bdev_nvme_reset_controller", 00:05:21.535 "bdev_nvme_get_transport_statistics", 00:05:21.535 "bdev_nvme_apply_firmware", 00:05:21.535 "bdev_nvme_detach_controller", 00:05:21.535 "bdev_nvme_get_controllers", 00:05:21.535 "bdev_nvme_attach_controller", 00:05:21.535 "bdev_nvme_set_hotplug", 00:05:21.535 "bdev_nvme_set_options", 00:05:21.535 "bdev_passthru_delete", 00:05:21.535 "bdev_passthru_create", 00:05:21.535 "bdev_lvol_set_parent_bdev", 00:05:21.535 "bdev_lvol_set_parent", 00:05:21.535 "bdev_lvol_check_shallow_copy", 00:05:21.535 "bdev_lvol_start_shallow_copy", 00:05:21.535 "bdev_lvol_grow_lvstore", 00:05:21.535 "bdev_lvol_get_lvols", 00:05:21.535 "bdev_lvol_get_lvstores", 00:05:21.535 "bdev_lvol_delete", 00:05:21.535 "bdev_lvol_set_read_only", 00:05:21.535 "bdev_lvol_resize", 00:05:21.535 "bdev_lvol_decouple_parent", 00:05:21.535 "bdev_lvol_inflate", 00:05:21.535 "bdev_lvol_rename", 00:05:21.535 "bdev_lvol_clone_bdev", 00:05:21.535 "bdev_lvol_clone", 00:05:21.535 "bdev_lvol_snapshot", 
00:05:21.535 "bdev_lvol_create", 00:05:21.535 "bdev_lvol_delete_lvstore", 00:05:21.535 "bdev_lvol_rename_lvstore", 00:05:21.535 "bdev_lvol_create_lvstore", 00:05:21.535 "bdev_raid_set_options", 00:05:21.535 "bdev_raid_remove_base_bdev", 00:05:21.535 "bdev_raid_add_base_bdev", 00:05:21.535 "bdev_raid_delete", 00:05:21.535 "bdev_raid_create", 00:05:21.535 "bdev_raid_get_bdevs", 00:05:21.535 "bdev_error_inject_error", 00:05:21.535 "bdev_error_delete", 00:05:21.535 "bdev_error_create", 00:05:21.535 "bdev_split_delete", 00:05:21.535 "bdev_split_create", 00:05:21.535 "bdev_delay_delete", 00:05:21.535 "bdev_delay_create", 00:05:21.535 "bdev_delay_update_latency", 00:05:21.535 "bdev_zone_block_delete", 00:05:21.535 "bdev_zone_block_create", 00:05:21.535 "blobfs_create", 00:05:21.535 "blobfs_detect", 00:05:21.535 "blobfs_set_cache_size", 00:05:21.535 "bdev_aio_delete", 00:05:21.535 "bdev_aio_rescan", 00:05:21.535 "bdev_aio_create", 00:05:21.535 "bdev_ftl_set_property", 00:05:21.535 "bdev_ftl_get_properties", 00:05:21.535 "bdev_ftl_get_stats", 00:05:21.535 "bdev_ftl_unmap", 00:05:21.535 "bdev_ftl_unload", 00:05:21.535 "bdev_ftl_delete", 00:05:21.535 "bdev_ftl_load", 00:05:21.535 "bdev_ftl_create", 00:05:21.535 "bdev_virtio_attach_controller", 00:05:21.535 "bdev_virtio_scsi_get_devices", 00:05:21.535 "bdev_virtio_detach_controller", 00:05:21.535 "bdev_virtio_blk_set_hotplug", 00:05:21.535 "bdev_iscsi_delete", 00:05:21.535 "bdev_iscsi_create", 00:05:21.535 "bdev_iscsi_set_options", 00:05:21.535 "bdev_uring_delete", 00:05:21.535 "bdev_uring_rescan", 00:05:21.535 "bdev_uring_create", 00:05:21.535 "accel_error_inject_error", 00:05:21.535 "ioat_scan_accel_module", 00:05:21.535 "dsa_scan_accel_module", 00:05:21.535 "iaa_scan_accel_module", 00:05:21.535 "vfu_virtio_create_fs_endpoint", 00:05:21.535 "vfu_virtio_create_scsi_endpoint", 00:05:21.535 "vfu_virtio_scsi_remove_target", 00:05:21.535 "vfu_virtio_scsi_add_target", 00:05:21.535 "vfu_virtio_create_blk_endpoint", 00:05:21.535 "vfu_virtio_delete_endpoint", 00:05:21.535 "keyring_file_remove_key", 00:05:21.535 "keyring_file_add_key", 00:05:21.535 "keyring_linux_set_options", 00:05:21.535 "fsdev_aio_delete", 00:05:21.535 "fsdev_aio_create", 00:05:21.535 "iscsi_get_histogram", 00:05:21.535 "iscsi_enable_histogram", 00:05:21.535 "iscsi_set_options", 00:05:21.535 "iscsi_get_auth_groups", 00:05:21.535 "iscsi_auth_group_remove_secret", 00:05:21.535 "iscsi_auth_group_add_secret", 00:05:21.535 "iscsi_delete_auth_group", 00:05:21.535 "iscsi_create_auth_group", 00:05:21.535 "iscsi_set_discovery_auth", 00:05:21.535 "iscsi_get_options", 00:05:21.535 "iscsi_target_node_request_logout", 00:05:21.535 "iscsi_target_node_set_redirect", 00:05:21.535 "iscsi_target_node_set_auth", 00:05:21.535 "iscsi_target_node_add_lun", 00:05:21.535 "iscsi_get_stats", 00:05:21.535 "iscsi_get_connections", 00:05:21.535 "iscsi_portal_group_set_auth", 00:05:21.535 "iscsi_start_portal_group", 00:05:21.535 "iscsi_delete_portal_group", 00:05:21.535 "iscsi_create_portal_group", 00:05:21.535 "iscsi_get_portal_groups", 00:05:21.535 "iscsi_delete_target_node", 00:05:21.535 "iscsi_target_node_remove_pg_ig_maps", 00:05:21.535 "iscsi_target_node_add_pg_ig_maps", 00:05:21.535 "iscsi_create_target_node", 00:05:21.535 "iscsi_get_target_nodes", 00:05:21.536 "iscsi_delete_initiator_group", 00:05:21.536 "iscsi_initiator_group_remove_initiators", 00:05:21.536 "iscsi_initiator_group_add_initiators", 00:05:21.536 "iscsi_create_initiator_group", 00:05:21.536 "iscsi_get_initiator_groups", 00:05:21.536 
"nvmf_set_crdt", 00:05:21.536 "nvmf_set_config", 00:05:21.536 "nvmf_set_max_subsystems", 00:05:21.536 "nvmf_stop_mdns_prr", 00:05:21.536 "nvmf_publish_mdns_prr", 00:05:21.536 "nvmf_subsystem_get_listeners", 00:05:21.536 "nvmf_subsystem_get_qpairs", 00:05:21.536 "nvmf_subsystem_get_controllers", 00:05:21.536 "nvmf_get_stats", 00:05:21.536 "nvmf_get_transports", 00:05:21.536 "nvmf_create_transport", 00:05:21.536 "nvmf_get_targets", 00:05:21.536 "nvmf_delete_target", 00:05:21.536 "nvmf_create_target", 00:05:21.536 "nvmf_subsystem_allow_any_host", 00:05:21.536 "nvmf_subsystem_set_keys", 00:05:21.536 "nvmf_subsystem_remove_host", 00:05:21.536 "nvmf_subsystem_add_host", 00:05:21.536 "nvmf_ns_remove_host", 00:05:21.536 "nvmf_ns_add_host", 00:05:21.536 "nvmf_subsystem_remove_ns", 00:05:21.536 "nvmf_subsystem_set_ns_ana_group", 00:05:21.536 "nvmf_subsystem_add_ns", 00:05:21.536 "nvmf_subsystem_listener_set_ana_state", 00:05:21.536 "nvmf_discovery_get_referrals", 00:05:21.536 "nvmf_discovery_remove_referral", 00:05:21.536 "nvmf_discovery_add_referral", 00:05:21.536 "nvmf_subsystem_remove_listener", 00:05:21.536 "nvmf_subsystem_add_listener", 00:05:21.536 "nvmf_delete_subsystem", 00:05:21.536 "nvmf_create_subsystem", 00:05:21.536 "nvmf_get_subsystems", 00:05:21.536 "env_dpdk_get_mem_stats", 00:05:21.536 "nbd_get_disks", 00:05:21.536 "nbd_stop_disk", 00:05:21.536 "nbd_start_disk", 00:05:21.536 "ublk_recover_disk", 00:05:21.536 "ublk_get_disks", 00:05:21.536 "ublk_stop_disk", 00:05:21.536 "ublk_start_disk", 00:05:21.536 "ublk_destroy_target", 00:05:21.536 "ublk_create_target", 00:05:21.536 "virtio_blk_create_transport", 00:05:21.536 "virtio_blk_get_transports", 00:05:21.536 "vhost_controller_set_coalescing", 00:05:21.536 "vhost_get_controllers", 00:05:21.536 "vhost_delete_controller", 00:05:21.536 "vhost_create_blk_controller", 00:05:21.536 "vhost_scsi_controller_remove_target", 00:05:21.536 "vhost_scsi_controller_add_target", 00:05:21.536 "vhost_start_scsi_controller", 00:05:21.536 "vhost_create_scsi_controller", 00:05:21.536 "thread_set_cpumask", 00:05:21.536 "scheduler_set_options", 00:05:21.536 "framework_get_governor", 00:05:21.536 "framework_get_scheduler", 00:05:21.536 "framework_set_scheduler", 00:05:21.536 "framework_get_reactors", 00:05:21.536 "thread_get_io_channels", 00:05:21.536 "thread_get_pollers", 00:05:21.536 "thread_get_stats", 00:05:21.536 "framework_monitor_context_switch", 00:05:21.536 "spdk_kill_instance", 00:05:21.536 "log_enable_timestamps", 00:05:21.536 "log_get_flags", 00:05:21.536 "log_clear_flag", 00:05:21.536 "log_set_flag", 00:05:21.536 "log_get_level", 00:05:21.536 "log_set_level", 00:05:21.536 "log_get_print_level", 00:05:21.536 "log_set_print_level", 00:05:21.536 "framework_enable_cpumask_locks", 00:05:21.536 "framework_disable_cpumask_locks", 00:05:21.536 "framework_wait_init", 00:05:21.536 "framework_start_init", 00:05:21.536 "scsi_get_devices", 00:05:21.536 "bdev_get_histogram", 00:05:21.536 "bdev_enable_histogram", 00:05:21.536 "bdev_set_qos_limit", 00:05:21.536 "bdev_set_qd_sampling_period", 00:05:21.536 "bdev_get_bdevs", 00:05:21.536 "bdev_reset_iostat", 00:05:21.536 "bdev_get_iostat", 00:05:21.536 "bdev_examine", 00:05:21.536 "bdev_wait_for_examine", 00:05:21.536 "bdev_set_options", 00:05:21.536 "accel_get_stats", 00:05:21.536 "accel_set_options", 00:05:21.536 "accel_set_driver", 00:05:21.536 "accel_crypto_key_destroy", 00:05:21.536 "accel_crypto_keys_get", 00:05:21.536 "accel_crypto_key_create", 00:05:21.536 "accel_assign_opc", 00:05:21.536 
"accel_get_module_info", 00:05:21.536 "accel_get_opc_assignments", 00:05:21.536 "vmd_rescan", 00:05:21.536 "vmd_remove_device", 00:05:21.536 "vmd_enable", 00:05:21.536 "sock_get_default_impl", 00:05:21.536 "sock_set_default_impl", 00:05:21.536 "sock_impl_set_options", 00:05:21.536 "sock_impl_get_options", 00:05:21.536 "iobuf_get_stats", 00:05:21.536 "iobuf_set_options", 00:05:21.536 "keyring_get_keys", 00:05:21.536 "vfu_tgt_set_base_path", 00:05:21.536 "framework_get_pci_devices", 00:05:21.536 "framework_get_config", 00:05:21.536 "framework_get_subsystems", 00:05:21.536 "fsdev_set_opts", 00:05:21.536 "fsdev_get_opts", 00:05:21.536 "trace_get_info", 00:05:21.536 "trace_get_tpoint_group_mask", 00:05:21.536 "trace_disable_tpoint_group", 00:05:21.536 "trace_enable_tpoint_group", 00:05:21.536 "trace_clear_tpoint_mask", 00:05:21.536 "trace_set_tpoint_mask", 00:05:21.536 "notify_get_notifications", 00:05:21.536 "notify_get_types", 00:05:21.536 "spdk_get_version", 00:05:21.536 "rpc_get_methods" 00:05:21.536 ] 00:05:21.536 05:23:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.536 05:23:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:21.536 05:23:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60539 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 60539 ']' 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 60539 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60539 00:05:21.536 killing process with pid 60539 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60539' 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 60539 00:05:21.536 05:23:01 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 60539 00:05:23.441 ************************************ 00:05:23.441 END TEST spdkcli_tcp 00:05:23.441 ************************************ 00:05:23.441 00:05:23.441 real 0m3.434s 00:05:23.441 user 0m6.317s 00:05:23.441 sys 0m0.509s 00:05:23.441 05:23:03 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.441 05:23:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.441 05:23:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.441 05:23:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.441 05:23:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.441 05:23:03 -- common/autotest_common.sh@10 -- # set +x 00:05:23.441 ************************************ 00:05:23.441 START TEST dpdk_mem_utility 00:05:23.441 ************************************ 00:05:23.442 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.701 * Looking for test storage... 
00:05:23.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.701 05:23:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.701 --rc genhtml_branch_coverage=1 00:05:23.701 --rc genhtml_function_coverage=1 00:05:23.701 --rc genhtml_legend=1 00:05:23.701 --rc geninfo_all_blocks=1 00:05:23.701 --rc geninfo_unexecuted_blocks=1 00:05:23.701 00:05:23.701 ' 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.701 --rc 
genhtml_branch_coverage=1 00:05:23.701 --rc genhtml_function_coverage=1 00:05:23.701 --rc genhtml_legend=1 00:05:23.701 --rc geninfo_all_blocks=1 00:05:23.701 --rc geninfo_unexecuted_blocks=1 00:05:23.701 00:05:23.701 ' 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.701 --rc genhtml_branch_coverage=1 00:05:23.701 --rc genhtml_function_coverage=1 00:05:23.701 --rc genhtml_legend=1 00:05:23.701 --rc geninfo_all_blocks=1 00:05:23.701 --rc geninfo_unexecuted_blocks=1 00:05:23.701 00:05:23.701 ' 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.701 --rc genhtml_branch_coverage=1 00:05:23.701 --rc genhtml_function_coverage=1 00:05:23.701 --rc genhtml_legend=1 00:05:23.701 --rc geninfo_all_blocks=1 00:05:23.701 --rc geninfo_unexecuted_blocks=1 00:05:23.701 00:05:23.701 ' 00:05:23.701 05:23:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:23.701 05:23:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60650 00:05:23.701 05:23:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60650 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60650 ']' 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.701 05:23:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.701 05:23:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.701 [2024-12-16 05:23:03.940643] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:23.701 [2024-12-16 05:23:03.940810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60650 ] 00:05:23.961 [2024-12-16 05:23:04.116594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.961 [2024-12-16 05:23:04.197286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.220 [2024-12-16 05:23:04.381253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.790 05:23:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.790 05:23:04 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:24.790 05:23:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:24.790 05:23:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:24.790 05:23:04 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.790 05:23:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.790 { 00:05:24.790 "filename": "/tmp/spdk_mem_dump.txt" 00:05:24.790 } 00:05:24.790 05:23:04 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.790 05:23:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.790 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:24.790 1 heaps totaling size 824.000000 MiB 00:05:24.790 size: 824.000000 MiB heap id: 0 00:05:24.790 end heaps---------- 00:05:24.790 9 mempools totaling size 603.782043 MiB 00:05:24.790 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:24.790 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:24.790 size: 100.555481 MiB name: bdev_io_60650 00:05:24.790 size: 50.003479 MiB name: msgpool_60650 00:05:24.790 size: 36.509338 MiB name: fsdev_io_60650 00:05:24.790 size: 21.763794 MiB name: PDU_Pool 00:05:24.790 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:24.790 size: 4.133484 MiB name: evtpool_60650 00:05:24.790 size: 0.026123 MiB name: Session_Pool 00:05:24.790 end mempools------- 00:05:24.790 6 memzones totaling size 4.142822 MiB 00:05:24.790 size: 1.000366 MiB name: RG_ring_0_60650 00:05:24.790 size: 1.000366 MiB name: RG_ring_1_60650 00:05:24.790 size: 1.000366 MiB name: RG_ring_4_60650 00:05:24.790 size: 1.000366 MiB name: RG_ring_5_60650 00:05:24.790 size: 0.125366 MiB name: RG_ring_2_60650 00:05:24.790 size: 0.015991 MiB name: RG_ring_3_60650 00:05:24.790 end memzones------- 00:05:24.790 05:23:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:24.790 heap id: 0 total size: 824.000000 MiB number of busy elements: 312 number of free elements: 18 00:05:24.790 list of free elements. 
size: 16.782104 MiB 00:05:24.790 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:24.790 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:24.790 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:24.790 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:24.790 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:24.790 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:24.790 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:24.790 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:24.790 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:24.790 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:24.790 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:24.790 element at address: 0x20001b400000 with size: 0.563660 MiB 00:05:24.790 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:24.790 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:24.790 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:24.790 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:24.790 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:24.790 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:24.790 list of standard malloc elements. size: 199.286987 MiB 00:05:24.790 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:24.790 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:24.790 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:24.790 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:24.790 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:24.790 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:24.790 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:24.790 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:24.790 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:24.790 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:24.790 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:24.790 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:24.790 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:24.790 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:24.791 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:24.791 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4920c0 with size: 0.000244 MiB 
00:05:24.791 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:24.791 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:24.792 element at 
address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:24.792 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:24.792 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886db80 
with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:24.792 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:24.792 list of memzone associated elements. 
size: 607.930908 MiB 00:05:24.792 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:24.792 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:24.792 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:24.792 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:24.792 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:24.792 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60650_0 00:05:24.792 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:24.792 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60650_0 00:05:24.792 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:24.792 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60650_0 00:05:24.792 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:24.792 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:24.792 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:24.792 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:24.792 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:24.792 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60650_0 00:05:24.792 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:24.792 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60650 00:05:24.792 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:24.792 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60650 00:05:24.792 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:24.792 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:24.792 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:24.792 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:24.792 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:24.792 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:24.792 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:24.792 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:24.792 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:24.792 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60650 00:05:24.792 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:24.792 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60650 00:05:24.792 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:24.792 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60650 00:05:24.792 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:24.792 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60650 00:05:24.792 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:24.792 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60650 00:05:24.792 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:24.792 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60650 00:05:24.792 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:24.792 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:24.792 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:24.792 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:24.792 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:24.792 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:24.792 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:24.792 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60650 00:05:24.792 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:24.792 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60650 00:05:24.792 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:24.792 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:24.792 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:24.792 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:24.792 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:24.792 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60650 00:05:24.792 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:24.793 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:24.793 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:24.793 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60650 00:05:24.793 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:24.793 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60650 00:05:24.793 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:24.793 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60650 00:05:24.793 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:24.793 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:24.793 05:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:24.793 05:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60650 00:05:24.793 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60650 ']' 00:05:24.793 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60650 00:05:24.793 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:24.793 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.793 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60650 00:05:25.051 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.051 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.051 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60650' 00:05:25.051 killing process with pid 60650 00:05:25.051 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60650 00:05:25.051 05:23:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60650 00:05:26.979 00:05:26.979 real 0m3.217s 00:05:26.979 user 0m3.360s 00:05:26.979 sys 0m0.472s 00:05:26.979 05:23:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.979 ************************************ 00:05:26.980 END TEST dpdk_mem_utility 00:05:26.980 ************************************ 00:05:26.980 05:23:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.980 05:23:06 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:26.980 05:23:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.980 05:23:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.980 05:23:06 -- common/autotest_common.sh@10 -- # set +x 
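A note on the dpdk_mem_utility run recorded above: the test drives two SPDK helper scripts, scripts/rpc.py (to issue the env_dpdk_get_mem_stats RPC, whose JSON reply names the dump file /tmp/spdk_mem_dump.txt) and scripts/dpdk_mem_info.py (to summarize that dump, with -m 0 for the per-heap element breakdown shown in the log). The sketch below reproduces that flow; it assumes an spdk_tgt is already listening on the default /var/tmp/spdk.sock, that the repo sits at the path shown in the log, and that dpdk_mem_info.py reads the freshly written dump file when invoked right after the RPC, exactly as the test does.
#!/usr/bin/env python3
# Hedged sketch of the dpdk_mem_utility flow recorded above (assumes a running
# spdk_tgt on /var/tmp/spdk.sock and the repo path taken from this log).
import json
import subprocess
SPDK_DIR = "/home/vagrant/spdk_repo/spdk"            # path as seen in the log
RPC = f"{SPDK_DIR}/scripts/rpc.py"
MEM_INFO = f"{SPDK_DIR}/scripts/dpdk_mem_info.py"
# Ask the target to dump its DPDK memory stats; the reply carries the dump
# file name (the log above shows {"filename": "/tmp/spdk_mem_dump.txt"}).
out = subprocess.run([RPC, "env_dpdk_get_mem_stats"],
                     check=True, capture_output=True, text=True).stdout
dump_file = json.loads(out)["filename"]
print("memory stats written to", dump_file)
# Summarize the dump the same way the test does: a plain run for the heap and
# mempool totals, then -m 0 for the per-heap element listing seen above.
subprocess.run([MEM_INFO], check=True)
subprocess.run([MEM_INFO, "-m", "0"], check=True)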
00:05:26.980 ************************************ 00:05:26.980 START TEST event 00:05:26.980 ************************************ 00:05:26.980 05:23:06 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:26.980 * Looking for test storage... 00:05:26.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:26.980 05:23:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.980 05:23:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.980 05:23:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.980 05:23:07 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.980 05:23:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.980 05:23:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.980 05:23:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.980 05:23:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.980 05:23:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.980 05:23:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.980 05:23:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.980 05:23:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.980 05:23:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.980 05:23:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.980 05:23:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.980 05:23:07 event -- scripts/common.sh@344 -- # case "$op" in 00:05:26.980 05:23:07 event -- scripts/common.sh@345 -- # : 1 00:05:26.980 05:23:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.980 05:23:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.980 05:23:07 event -- scripts/common.sh@365 -- # decimal 1 00:05:26.980 05:23:07 event -- scripts/common.sh@353 -- # local d=1 00:05:26.980 05:23:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.980 05:23:07 event -- scripts/common.sh@355 -- # echo 1 00:05:26.980 05:23:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.980 05:23:07 event -- scripts/common.sh@366 -- # decimal 2 00:05:26.980 05:23:07 event -- scripts/common.sh@353 -- # local d=2 00:05:26.980 05:23:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.980 05:23:07 event -- scripts/common.sh@355 -- # echo 2 00:05:26.980 05:23:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.980 05:23:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.980 05:23:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.980 05:23:07 event -- scripts/common.sh@368 -- # return 0 00:05:26.980 05:23:07 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.980 05:23:07 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.980 --rc genhtml_branch_coverage=1 00:05:26.980 --rc genhtml_function_coverage=1 00:05:26.980 --rc genhtml_legend=1 00:05:26.980 --rc geninfo_all_blocks=1 00:05:26.980 --rc geninfo_unexecuted_blocks=1 00:05:26.980 00:05:26.980 ' 00:05:26.980 05:23:07 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.980 --rc genhtml_branch_coverage=1 00:05:26.980 --rc genhtml_function_coverage=1 00:05:26.980 --rc genhtml_legend=1 00:05:26.980 --rc 
geninfo_all_blocks=1 00:05:26.980 --rc geninfo_unexecuted_blocks=1 00:05:26.980 00:05:26.980 ' 00:05:26.980 05:23:07 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.980 --rc genhtml_branch_coverage=1 00:05:26.980 --rc genhtml_function_coverage=1 00:05:26.980 --rc genhtml_legend=1 00:05:26.980 --rc geninfo_all_blocks=1 00:05:26.980 --rc geninfo_unexecuted_blocks=1 00:05:26.980 00:05:26.980 ' 00:05:26.980 05:23:07 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.980 --rc genhtml_branch_coverage=1 00:05:26.980 --rc genhtml_function_coverage=1 00:05:26.980 --rc genhtml_legend=1 00:05:26.980 --rc geninfo_all_blocks=1 00:05:26.980 --rc geninfo_unexecuted_blocks=1 00:05:26.980 00:05:26.980 ' 00:05:26.980 05:23:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:26.980 05:23:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.980 05:23:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.980 05:23:07 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:26.980 05:23:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.980 05:23:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.980 ************************************ 00:05:26.980 START TEST event_perf 00:05:26.980 ************************************ 00:05:26.980 05:23:07 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.980 Running I/O for 1 seconds...[2024-12-16 05:23:07.134300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:26.980 [2024-12-16 05:23:07.134472] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60747 ] 00:05:27.239 [2024-12-16 05:23:07.313798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.239 [2024-12-16 05:23:07.405044] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.239 [2024-12-16 05:23:07.405160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.239 Running I/O for 1 seconds...[2024-12-16 05:23:07.406086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.240 [2024-12-16 05:23:07.406104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.617 00:05:28.617 lcore 0: 200834 00:05:28.617 lcore 1: 200834 00:05:28.617 lcore 2: 200835 00:05:28.617 lcore 3: 200834 00:05:28.617 done. 
00:05:28.617 00:05:28.617 real 0m1.533s 00:05:28.617 user 0m4.301s 00:05:28.617 sys 0m0.106s 00:05:28.617 05:23:08 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.617 05:23:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.617 ************************************ 00:05:28.617 END TEST event_perf 00:05:28.617 ************************************ 00:05:28.617 05:23:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:28.617 05:23:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:28.617 05:23:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.617 05:23:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.617 ************************************ 00:05:28.617 START TEST event_reactor 00:05:28.617 ************************************ 00:05:28.617 05:23:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:28.617 [2024-12-16 05:23:08.714944] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:28.617 [2024-12-16 05:23:08.715076] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60792 ] 00:05:28.876 [2024-12-16 05:23:08.878353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.876 [2024-12-16 05:23:08.959840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.255 test_start 00:05:30.255 oneshot 00:05:30.255 tick 100 00:05:30.255 tick 100 00:05:30.255 tick 250 00:05:30.255 tick 100 00:05:30.255 tick 100 00:05:30.255 tick 100 00:05:30.255 tick 250 00:05:30.255 tick 500 00:05:30.255 tick 100 00:05:30.255 tick 100 00:05:30.255 tick 250 00:05:30.255 tick 100 00:05:30.255 tick 100 00:05:30.255 test_end 00:05:30.255 00:05:30.255 real 0m1.475s 00:05:30.255 user 0m1.293s 00:05:30.255 sys 0m0.072s 00:05:30.255 05:23:10 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.255 ************************************ 00:05:30.255 END TEST event_reactor 00:05:30.255 ************************************ 00:05:30.255 05:23:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:30.255 05:23:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.255 05:23:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:30.255 05:23:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.255 05:23:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.255 ************************************ 00:05:30.255 START TEST event_reactor_perf 00:05:30.255 ************************************ 00:05:30.255 05:23:10 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.255 [2024-12-16 05:23:10.253023] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:30.255 [2024-12-16 05:23:10.253194] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60823 ] 00:05:30.255 [2024-12-16 05:23:10.431794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.514 [2024-12-16 05:23:10.521093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.891 test_start 00:05:31.891 test_end 00:05:31.891 Performance: 332978 events per second 00:05:31.891 00:05:31.891 real 0m1.516s 00:05:31.891 user 0m1.323s 00:05:31.891 sys 0m0.084s 00:05:31.891 05:23:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.891 05:23:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.891 ************************************ 00:05:31.891 END TEST event_reactor_perf 00:05:31.891 ************************************ 00:05:31.891 05:23:11 event -- event/event.sh@49 -- # uname -s 00:05:31.892 05:23:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:31.892 05:23:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.892 05:23:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.892 05:23:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.892 05:23:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.892 ************************************ 00:05:31.892 START TEST event_scheduler 00:05:31.892 ************************************ 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.892 * Looking for test storage... 
00:05:31.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.892 05:23:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.892 --rc genhtml_branch_coverage=1 00:05:31.892 --rc genhtml_function_coverage=1 00:05:31.892 --rc genhtml_legend=1 00:05:31.892 --rc geninfo_all_blocks=1 00:05:31.892 --rc geninfo_unexecuted_blocks=1 00:05:31.892 00:05:31.892 ' 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.892 --rc genhtml_branch_coverage=1 00:05:31.892 --rc genhtml_function_coverage=1 00:05:31.892 --rc genhtml_legend=1 00:05:31.892 --rc geninfo_all_blocks=1 00:05:31.892 --rc geninfo_unexecuted_blocks=1 00:05:31.892 00:05:31.892 ' 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.892 --rc genhtml_branch_coverage=1 00:05:31.892 --rc genhtml_function_coverage=1 00:05:31.892 --rc genhtml_legend=1 00:05:31.892 --rc geninfo_all_blocks=1 00:05:31.892 --rc geninfo_unexecuted_blocks=1 00:05:31.892 00:05:31.892 ' 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.892 --rc genhtml_branch_coverage=1 00:05:31.892 --rc genhtml_function_coverage=1 00:05:31.892 --rc genhtml_legend=1 00:05:31.892 --rc geninfo_all_blocks=1 00:05:31.892 --rc geninfo_unexecuted_blocks=1 00:05:31.892 00:05:31.892 ' 00:05:31.892 05:23:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:31.892 05:23:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60899 00:05:31.892 05:23:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:31.892 05:23:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.892 05:23:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60899 00:05:31.892 05:23:11 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60899 ']' 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.892 05:23:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.892 [2024-12-16 05:23:12.056210] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:31.892 [2024-12-16 05:23:12.056376] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60899 ] 00:05:32.151 [2024-12-16 05:23:12.231998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.151 [2024-12-16 05:23:12.364759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.151 [2024-12-16 05:23:12.364900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.151 [2024-12-16 05:23:12.366219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.151 [2024-12-16 05:23:12.366272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:33.089 05:23:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.089 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.089 POWER: Cannot set governor of lcore 0 to performance 00:05:33.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.089 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.089 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.089 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:33.089 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:33.089 POWER: Unable to set Power Management Environment for lcore 0 00:05:33.089 [2024-12-16 05:23:13.013257] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:33.089 [2024-12-16 05:23:13.013342] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:33.089 [2024-12-16 05:23:13.013431] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:33.089 [2024-12-16 05:23:13.013522] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:33.089 [2024-12-16 05:23:13.013632] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:33.089 [2024-12-16 05:23:13.013728] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 [2024-12-16 05:23:13.166951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.089 [2024-12-16 05:23:13.255329] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 ************************************ 00:05:33.089 START TEST scheduler_create_thread 00:05:33.089 ************************************ 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 2 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 3 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 4 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 5 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 6 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 7 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 8 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 9 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.089 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.349 10 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.349 05:23:13 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.349 05:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.726 05:23:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.726 05:23:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:34.726 05:23:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:34.726 05:23:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.726 05:23:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.669 ************************************ 00:05:35.669 END TEST scheduler_create_thread 00:05:35.669 ************************************ 00:05:35.669 05:23:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.669 00:05:35.669 real 0m2.617s 00:05:35.669 user 0m0.019s 00:05:35.669 sys 0m0.006s 00:05:35.669 05:23:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.669 05:23:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.928 05:23:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:35.928 05:23:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60899 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60899 ']' 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60899 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60899 00:05:35.928 killing process with pid 60899 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
60899' 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60899 00:05:35.928 05:23:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60899 00:05:36.186 [2024-12-16 05:23:16.365674] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:37.123 00:05:37.123 real 0m5.505s 00:05:37.123 user 0m9.809s 00:05:37.123 sys 0m0.401s 00:05:37.124 05:23:17 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.124 ************************************ 00:05:37.124 END TEST event_scheduler 00:05:37.124 ************************************ 00:05:37.124 05:23:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.124 05:23:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:37.124 05:23:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:37.124 05:23:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.124 05:23:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.124 05:23:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.124 ************************************ 00:05:37.124 START TEST app_repeat 00:05:37.124 ************************************ 00:05:37.124 05:23:17 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61005 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61005' 00:05:37.124 Process app_repeat pid: 61005 00:05:37.124 spdk_app_start Round 0 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:37.124 05:23:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61005 /var/tmp/spdk-nbd.sock 00:05:37.124 05:23:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61005 ']' 00:05:37.124 05:23:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.124 05:23:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.124 05:23:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:37.124 05:23:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.124 05:23:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.383 [2024-12-16 05:23:17.395782] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:37.383 [2024-12-16 05:23:17.396496] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61005 ] 00:05:37.383 [2024-12-16 05:23:17.565686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.642 [2024-12-16 05:23:17.658554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.642 [2024-12-16 05:23:17.658559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.642 [2024-12-16 05:23:17.826192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.210 05:23:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.210 05:23:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:38.210 05:23:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.468 Malloc0 00:05:38.468 05:23:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.036 Malloc1 00:05:39.036 05:23:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.036 05:23:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.296 /dev/nbd0 00:05:39.296 05:23:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.296 05:23:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.296 1+0 records in 00:05:39.296 1+0 records out 00:05:39.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041797 s, 9.8 MB/s 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.296 05:23:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.296 05:23:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.296 05:23:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.296 05:23:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.555 /dev/nbd1 00:05:39.555 05:23:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.555 05:23:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.555 1+0 records in 00:05:39.555 1+0 records out 00:05:39.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026557 s, 15.4 MB/s 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.555 05:23:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.555 05:23:19 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:39.555 05:23:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.555 05:23:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.555 05:23:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.555 05:23:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.555 05:23:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.815 05:23:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.815 { 00:05:39.815 "nbd_device": "/dev/nbd0", 00:05:39.815 "bdev_name": "Malloc0" 00:05:39.815 }, 00:05:39.815 { 00:05:39.815 "nbd_device": "/dev/nbd1", 00:05:39.815 "bdev_name": "Malloc1" 00:05:39.815 } 00:05:39.815 ]' 00:05:39.815 05:23:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.815 { 00:05:39.815 "nbd_device": "/dev/nbd0", 00:05:39.815 "bdev_name": "Malloc0" 00:05:39.815 }, 00:05:39.815 { 00:05:39.815 "nbd_device": "/dev/nbd1", 00:05:39.815 "bdev_name": "Malloc1" 00:05:39.815 } 00:05:39.815 ]' 00:05:39.815 05:23:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.815 /dev/nbd1' 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.815 /dev/nbd1' 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.815 256+0 records in 00:05:39.815 256+0 records out 00:05:39.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00778184 s, 135 MB/s 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.815 256+0 records in 00:05:39.815 256+0 records out 00:05:39.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258063 s, 40.6 MB/s 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.815 05:23:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.075 256+0 records in 00:05:40.075 
256+0 records out 00:05:40.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319364 s, 32.8 MB/s 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.075 05:23:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.334 05:23:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.594 05:23:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.853 05:23:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.853 05:23:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.853 05:23:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.853 05:23:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.853 05:23:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.853 05:23:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.853 05:23:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.853 05:23:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.853 05:23:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.853 05:23:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.113 05:23:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.050 [2024-12-16 05:23:22.164583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.050 [2024-12-16 05:23:22.247776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.050 [2024-12-16 05:23:22.247779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.309 [2024-12-16 05:23:22.392211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.309 [2024-12-16 05:23:22.392358] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.309 [2024-12-16 05:23:22.392384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.215 spdk_app_start Round 1 00:05:44.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.215 05:23:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.215 05:23:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:44.215 05:23:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61005 /var/tmp/spdk-nbd.sock 00:05:44.215 05:23:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61005 ']' 00:05:44.215 05:23:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.215 05:23:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.215 05:23:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:44.215 05:23:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.215 05:23:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.474 05:23:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.474 05:23:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.474 05:23:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.733 Malloc0 00:05:44.733 05:23:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.993 Malloc1 00:05:44.993 05:23:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.993 05:23:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.250 /dev/nbd0 00:05:45.250 05:23:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.250 05:23:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.250 05:23:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:45.250 05:23:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.250 05:23:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.250 05:23:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.250 05:23:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:45.250 05:23:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.250 05:23:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.250 05:23:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.250 05:23:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.250 1+0 records in 00:05:45.250 1+0 records out 
00:05:45.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228578 s, 17.9 MB/s 00:05:45.509 05:23:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.509 05:23:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.509 05:23:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.509 05:23:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.509 05:23:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.509 05:23:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.509 05:23:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.509 05:23:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.767 /dev/nbd1 00:05:45.767 05:23:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.767 05:23:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.767 1+0 records in 00:05:45.767 1+0 records out 00:05:45.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485495 s, 8.4 MB/s 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.767 05:23:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.767 05:23:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.767 05:23:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.767 05:23:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.767 05:23:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.767 05:23:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.027 { 00:05:46.027 "nbd_device": "/dev/nbd0", 00:05:46.027 "bdev_name": "Malloc0" 00:05:46.027 }, 00:05:46.027 { 00:05:46.027 "nbd_device": "/dev/nbd1", 00:05:46.027 "bdev_name": "Malloc1" 00:05:46.027 } 
00:05:46.027 ]' 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.027 { 00:05:46.027 "nbd_device": "/dev/nbd0", 00:05:46.027 "bdev_name": "Malloc0" 00:05:46.027 }, 00:05:46.027 { 00:05:46.027 "nbd_device": "/dev/nbd1", 00:05:46.027 "bdev_name": "Malloc1" 00:05:46.027 } 00:05:46.027 ]' 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.027 /dev/nbd1' 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.027 /dev/nbd1' 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.027 256+0 records in 00:05:46.027 256+0 records out 00:05:46.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00884126 s, 119 MB/s 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.027 256+0 records in 00:05:46.027 256+0 records out 00:05:46.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313523 s, 33.4 MB/s 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.027 256+0 records in 00:05:46.027 256+0 records out 00:05:46.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299082 s, 35.1 MB/s 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.027 05:23:26 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.027 05:23:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.596 05:23:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.855 05:23:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.113 05:23:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.113 05:23:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.681 05:23:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.618 [2024-12-16 05:23:28.563099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.618 [2024-12-16 05:23:28.648098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.618 [2024-12-16 05:23:28.648105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.618 [2024-12-16 05:23:28.791222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.618 [2024-12-16 05:23:28.791408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.618 [2024-12-16 05:23:28.791430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.524 spdk_app_start Round 2 00:05:50.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.524 05:23:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.524 05:23:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.524 05:23:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61005 /var/tmp/spdk-nbd.sock 00:05:50.524 05:23:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61005 ']' 00:05:50.524 05:23:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.524 05:23:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.524 05:23:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:50.524 05:23:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.524 05:23:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.783 05:23:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.783 05:23:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.783 05:23:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.351 Malloc0 00:05:51.351 05:23:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.610 Malloc1 00:05:51.610 05:23:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.610 05:23:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.869 /dev/nbd0 00:05:51.869 05:23:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.869 05:23:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.869 1+0 records in 00:05:51.869 1+0 records out 
00:05:51.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268686 s, 15.2 MB/s 00:05:51.869 05:23:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.869 05:23:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.869 05:23:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.869 05:23:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.869 05:23:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.869 05:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.869 05:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.869 05:23:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.129 /dev/nbd1 00:05:52.129 05:23:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.129 05:23:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.129 1+0 records in 00:05:52.129 1+0 records out 00:05:52.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027699 s, 14.8 MB/s 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.129 05:23:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.129 05:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.129 05:23:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.129 05:23:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.129 05:23:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.129 05:23:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.388 05:23:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.388 { 00:05:52.388 "nbd_device": "/dev/nbd0", 00:05:52.388 "bdev_name": "Malloc0" 00:05:52.388 }, 00:05:52.388 { 00:05:52.388 "nbd_device": "/dev/nbd1", 00:05:52.388 "bdev_name": "Malloc1" 00:05:52.388 } 
00:05:52.388 ]' 00:05:52.388 05:23:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.388 { 00:05:52.388 "nbd_device": "/dev/nbd0", 00:05:52.388 "bdev_name": "Malloc0" 00:05:52.388 }, 00:05:52.388 { 00:05:52.388 "nbd_device": "/dev/nbd1", 00:05:52.388 "bdev_name": "Malloc1" 00:05:52.388 } 00:05:52.388 ]' 00:05:52.388 05:23:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.647 /dev/nbd1' 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.647 /dev/nbd1' 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.647 05:23:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.648 256+0 records in 00:05:52.648 256+0 records out 00:05:52.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00745842 s, 141 MB/s 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.648 256+0 records in 00:05:52.648 256+0 records out 00:05:52.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329099 s, 31.9 MB/s 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.648 256+0 records in 00:05:52.648 256+0 records out 00:05:52.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0364767 s, 28.7 MB/s 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.648 05:23:32 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.648 05:23:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.907 05:23:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.167 05:23:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.426 05:23:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.426 05:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.426 05:23:33 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.686 05:23:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.686 05:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.686 05:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.686 05:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.686 05:23:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.686 05:23:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.686 05:23:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.686 05:23:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.686 05:23:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.686 05:23:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.945 05:23:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.882 [2024-12-16 05:23:35.055080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.141 [2024-12-16 05:23:35.143423] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.141 [2024-12-16 05:23:35.143433] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.141 [2024-12-16 05:23:35.291822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.141 [2024-12-16 05:23:35.291951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.141 [2024-12-16 05:23:35.291977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.050 05:23:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61005 /var/tmp/spdk-nbd.sock 00:05:57.050 05:23:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 61005 ']' 00:05:57.050 05:23:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.050 05:23:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.050 05:23:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
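Editor's note: the app_repeat rounds traced above keep repeating the same nbd_common.sh cycle. The following is a rough standalone bash sketch of that write/compare/teardown flow, paraphrased from the xtrace output; the socket path, bdev names, temp-file location and the 0.1 s poll interval are illustrative stand-ins, not guaranteed details of the real helpers.

#!/usr/bin/env bash
# Sketch of the nbd data-verify cycle seen in the app_repeat rounds above.
# Assumes an SPDK target is already listening on /var/tmp/spdk-nbd.sock with
# Malloc0/Malloc1 bdevs created (as bdev_malloc_create does in the log).
set -euo pipefail

rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
nbd_list=(/dev/nbd0 /dev/nbd1)
bdev_list=(Malloc0 Malloc1)
tmp_file=/tmp/nbdrandtest   # illustrative path

# Export each bdev as an nbd block device.
for i in "${!nbd_list[@]}"; do
    $rpc nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
done

# Write 1 MiB of random data to every device, then verify it byte-for-byte.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done
rm "$tmp_file"

# Tear down and wait for the kernel to drop each nbd from /proc/partitions.
for nbd in "${nbd_list[@]}"; do
    $rpc nbd_stop_disk "$nbd"
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename "$nbd")" /proc/partitions || break
        sleep 0.1
    done
done

# nbd_get_disks should now report zero exported devices.
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[[ $count -eq 0 ]]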
00:05:57.050 05:23:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.050 05:23:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:57.309 05:23:37 event.app_repeat -- event/event.sh@39 -- # killprocess 61005 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 61005 ']' 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 61005 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61005 00:05:57.309 killing process with pid 61005 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61005' 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@973 -- # kill 61005 00:05:57.309 05:23:37 event.app_repeat -- common/autotest_common.sh@978 -- # wait 61005 00:05:58.246 spdk_app_start is called in Round 0. 00:05:58.246 Shutdown signal received, stop current app iteration 00:05:58.246 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:58.246 spdk_app_start is called in Round 1. 00:05:58.246 Shutdown signal received, stop current app iteration 00:05:58.246 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:58.246 spdk_app_start is called in Round 2. 00:05:58.246 Shutdown signal received, stop current app iteration 00:05:58.246 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:58.246 spdk_app_start is called in Round 3. 00:05:58.246 Shutdown signal received, stop current app iteration 00:05:58.246 05:23:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:58.246 ************************************ 00:05:58.246 END TEST app_repeat 00:05:58.246 ************************************ 00:05:58.246 05:23:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:58.246 00:05:58.246 real 0m21.072s 00:05:58.246 user 0m47.214s 00:05:58.246 sys 0m2.701s 00:05:58.246 05:23:38 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.246 05:23:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.246 05:23:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:58.246 05:23:38 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:58.246 05:23:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.246 05:23:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.246 05:23:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.246 ************************************ 00:05:58.246 START TEST cpu_locks 00:05:58.246 ************************************ 00:05:58.246 05:23:38 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:58.506 * Looking for test storage... 
00:05:58.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.506 05:23:38 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:58.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.506 --rc genhtml_branch_coverage=1 00:05:58.506 --rc genhtml_function_coverage=1 00:05:58.506 --rc genhtml_legend=1 00:05:58.506 --rc geninfo_all_blocks=1 00:05:58.506 --rc geninfo_unexecuted_blocks=1 00:05:58.506 00:05:58.506 ' 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:58.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.506 --rc genhtml_branch_coverage=1 00:05:58.506 --rc genhtml_function_coverage=1 
00:05:58.506 --rc genhtml_legend=1 00:05:58.506 --rc geninfo_all_blocks=1 00:05:58.506 --rc geninfo_unexecuted_blocks=1 00:05:58.506 00:05:58.506 ' 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:58.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.506 --rc genhtml_branch_coverage=1 00:05:58.506 --rc genhtml_function_coverage=1 00:05:58.506 --rc genhtml_legend=1 00:05:58.506 --rc geninfo_all_blocks=1 00:05:58.506 --rc geninfo_unexecuted_blocks=1 00:05:58.506 00:05:58.506 ' 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:58.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.506 --rc genhtml_branch_coverage=1 00:05:58.506 --rc genhtml_function_coverage=1 00:05:58.506 --rc genhtml_legend=1 00:05:58.506 --rc geninfo_all_blocks=1 00:05:58.506 --rc geninfo_unexecuted_blocks=1 00:05:58.506 00:05:58.506 ' 00:05:58.506 05:23:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:58.506 05:23:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:58.506 05:23:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:58.506 05:23:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.506 05:23:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.506 ************************************ 00:05:58.506 START TEST default_locks 00:05:58.506 ************************************ 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61469 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61469 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61469 ']' 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.506 05:23:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.765 [2024-12-16 05:23:38.783008] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:58.765 [2024-12-16 05:23:38.783894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61469 ] 00:05:58.765 [2024-12-16 05:23:38.957063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.024 [2024-12-16 05:23:39.059365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.283 [2024-12-16 05:23:39.284877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.542 05:23:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.542 05:23:39 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:59.542 05:23:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61469 00:05:59.542 05:23:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61469 00:05:59.542 05:23:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.801 05:23:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61469 00:05:59.801 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 61469 ']' 00:05:59.801 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 61469 00:05:59.801 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:59.801 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.801 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61469 00:06:00.059 killing process with pid 61469 00:06:00.059 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.059 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.059 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61469' 00:06:00.059 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 61469 00:06:00.059 05:23:40 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 61469 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61469 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61469 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 61469 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61469 ']' 00:06:01.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.964 ERROR: process (pid: 61469) is no longer running 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.964 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61469) - No such process 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.964 00:06:01.964 real 0m3.164s 00:06:01.964 user 0m3.251s 00:06:01.964 sys 0m0.557s 00:06:01.964 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.964 ************************************ 00:06:01.965 END TEST default_locks 00:06:01.965 ************************************ 00:06:01.965 05:23:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.965 05:23:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.965 05:23:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.965 05:23:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.965 05:23:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.965 ************************************ 00:06:01.965 START TEST default_locks_via_rpc 00:06:01.965 ************************************ 00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:01.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
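Editor's note: the default_locks trace above reduces to one check: while a target started with -m 0x1 is alive, lslocks must show it holding an spdk_cpu_lock entry, and after the process is killed no such lock may remain. A hedged sketch of that check follows, reconstructed from the cpu_locks.sh helpers visible in the trace; the lock-file glob path and the sleep in place of waitforlisten are assumptions.

#!/usr/bin/env bash
# Sketch of the core-lock check exercised by default_locks above.
set -euo pipefail

# True when the given PID holds an SPDK CPU-core lock according to lslocks.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

build/bin/spdk_tgt -m 0x1 &
pid=$!
sleep 2   # the real test uses waitforlisten to poll the RPC socket

locks_exist "$pid" && echo "core lock held by $pid"

kill "$pid"
wait "$pid" || true

# After shutdown no lock files should survive; the path is an assumption.
shopt -s nullglob
lock_files=(/var/tmp/spdk_cpu_lock*)
if (( ${#lock_files[@]} != 0 )); then
    echo "stale lock files: ${lock_files[*]}" >&2
    exit 1
fi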
00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61533 00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61533 00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61533 ']' 00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.965 05:23:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.965 [2024-12-16 05:23:42.024963] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:01.965 [2024-12-16 05:23:42.025363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61533 ] 00:06:01.965 [2024-12-16 05:23:42.206956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.223 [2024-12-16 05:23:42.300243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.483 [2024-12-16 05:23:42.483708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 61533 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61533 00:06:02.742 05:23:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61533 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61533 ']' 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61533 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61533 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61533' 00:06:03.310 killing process with pid 61533 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61533 00:06:03.310 05:23:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61533 00:06:05.214 00:06:05.214 real 0m3.247s 00:06:05.214 user 0m3.397s 00:06:05.214 sys 0m0.592s 00:06:05.214 05:23:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.214 05:23:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.214 ************************************ 00:06:05.215 END TEST default_locks_via_rpc 00:06:05.215 ************************************ 00:06:05.215 05:23:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:05.215 05:23:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.215 05:23:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.215 05:23:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.215 ************************************ 00:06:05.215 START TEST non_locking_app_on_locked_coremask 00:06:05.215 ************************************ 00:06:05.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
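Editor's note: default_locks_via_rpc, which finished just above, toggles the same core lock at runtime instead of at startup, using the two RPCs visible in the trace (framework_disable_cpumask_locks and framework_enable_cpumask_locks). A condensed sketch under the same assumptions as the previous one; the exact release/acquire semantics are inferred from the order of checks in the log.

#!/usr/bin/env bash
# Sketch of the runtime lock toggle exercised by default_locks_via_rpc above.
set -euo pipefail

rpc="scripts/rpc.py"   # defaults to /var/tmp/spdk.sock

build/bin/spdk_tgt -m 0x1 &
pid=$!
sleep 2   # stand-in for waitforlisten

# Releasing the core locks over RPC should leave no spdk_cpu_lock held.
$rpc framework_disable_cpumask_locks
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "lock still held after disable RPC" >&2
    exit 1
fi

# Re-enabling them should make the lock reappear for the same PID.
$rpc framework_enable_cpumask_locks
lslocks -p "$pid" | grep -q spdk_cpu_lock

kill "$pid"
wait "$pid" || true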
00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61601 00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61601 /var/tmp/spdk.sock 00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61601 ']' 00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.215 05:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.215 [2024-12-16 05:23:45.291821] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:05.215 [2024-12-16 05:23:45.292862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61601 ] 00:06:05.474 [2024-12-16 05:23:45.478659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.474 [2024-12-16 05:23:45.559691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.733 [2024-12-16 05:23:45.751888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61617 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61617 /var/tmp/spdk2.sock 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61617 ']' 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:06.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.300 05:23:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.300 [2024-12-16 05:23:46.388852] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:06.300 [2024-12-16 05:23:46.389034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61617 ] 00:06:06.559 [2024-12-16 05:23:46.574794] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:06.559 [2024-12-16 05:23:46.574854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.559 [2024-12-16 05:23:46.745790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.128 [2024-12-16 05:23:47.125539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.065 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.065 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.065 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61601 00:06:08.065 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61601 00:06:08.065 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.633 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61601 00:06:08.633 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61601 ']' 00:06:08.633 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61601 00:06:08.633 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.633 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.633 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61601 00:06:08.891 killing process with pid 61601 00:06:08.891 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.891 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.891 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61601' 00:06:08.891 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61601 00:06:08.891 05:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61601 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61617 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61617 ']' 
00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61617 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61617 00:06:12.180 killing process with pid 61617 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61617' 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61617 00:06:12.180 05:23:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61617 00:06:14.114 00:06:14.114 real 0m8.949s 00:06:14.114 user 0m9.544s 00:06:14.114 sys 0m1.179s 00:06:14.114 05:23:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.114 05:23:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.114 ************************************ 00:06:14.114 END TEST non_locking_app_on_locked_coremask 00:06:14.114 ************************************ 00:06:14.114 05:23:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.114 05:23:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.114 05:23:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.114 05:23:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.114 ************************************ 00:06:14.114 START TEST locking_app_on_unlocked_coremask 00:06:14.114 ************************************ 00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61738 00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61738 /var/tmp/spdk.sock 00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61738 ']' 00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
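Editor's note: non_locking_app_on_locked_coremask, which just ended, is the coexistence case: the first target claims core 0, and a second target on the same mask only starts because it is told not to take the lock and is given its own RPC socket. Roughly, under the same assumptions as the sketches above:

#!/usr/bin/env bash
# Sketch of non_locking_app_on_locked_coremask: a locking and a non-locking
# target share core 0 because the second instance passes --disable-cpumask-locks.
set -euo pipefail

build/bin/spdk_tgt -m 0x1 &
locking_pid=$!
sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock

# Same core mask, but no lock acquisition and a separate RPC socket.
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
non_locking_pid=$!
sleep 2   # stand-in for waitforlisten on /var/tmp/spdk2.sock

# Only the first instance should be holding the core lock.
lslocks -p "$locking_pid" | grep -q spdk_cpu_lock

kill "$locking_pid" "$non_locking_pid"
wait || true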
00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.114 05:23:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.114 [2024-12-16 05:23:54.324226] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:14.114 [2024-12-16 05:23:54.324403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61738 ] 00:06:14.373 [2024-12-16 05:23:54.495826] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:14.373 [2024-12-16 05:23:54.495876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.373 [2024-12-16 05:23:54.593313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.632 [2024-12-16 05:23:54.776835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61754 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61754 /var/tmp/spdk2.sock 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61754 ']' 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.200 05:23:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.200 [2024-12-16 05:23:55.368515] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:15.200 [2024-12-16 05:23:55.369034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61754 ] 00:06:15.458 [2024-12-16 05:23:55.559489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.717 [2024-12-16 05:23:55.738878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.976 [2024-12-16 05:23:56.110126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.914 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.914 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:16.914 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61754 00:06:16.914 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.914 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61754 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61738 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61738 ']' 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61738 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61738 00:06:17.851 killing process with pid 61738 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61738' 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61738 00:06:17.851 05:23:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61738 00:06:21.140 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61754 00:06:21.140 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61754 ']' 00:06:21.140 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61754 00:06:21.140 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.140 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.140 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61754 00:06:21.399 killing process with pid 61754 00:06:21.399 05:24:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.399 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.399 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61754' 00:06:21.399 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61754 00:06:21.399 05:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61754 00:06:23.303 ************************************ 00:06:23.303 END TEST locking_app_on_unlocked_coremask 00:06:23.303 ************************************ 00:06:23.303 00:06:23.303 real 0m9.061s 00:06:23.303 user 0m9.570s 00:06:23.303 sys 0m1.188s 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.303 05:24:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.303 05:24:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.303 05:24:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.303 05:24:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.303 ************************************ 00:06:23.303 START TEST locking_app_on_locked_coremask 00:06:23.303 ************************************ 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61881 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61881 /var/tmp/spdk.sock 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61881 ']' 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.303 05:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.303 [2024-12-16 05:24:03.441575] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
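Editor's note: locking_app_on_unlocked_coremask, which wraps up above, is the mirror image of the previous case: the first target is started with --disable-cpumask-locks, so a second, normally locking target can still claim core 0. A sketch under the same assumptions (relative paths, sleep in place of waitforlisten):

#!/usr/bin/env bash
# Sketch of locking_app_on_unlocked_coremask: the first target leaves core 0
# unlocked, so a second target with default locking can still start on it.
set -euo pipefail

build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
unlocked_pid=$!
sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock

build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
locking_pid=$!
sleep 2   # stand-in for waitforlisten on /var/tmp/spdk2.sock

# The core lock should belong to the second (default-locking) instance only.
lslocks -p "$locking_pid" | grep -q spdk_cpu_lock

kill "$unlocked_pid" "$locking_pid"
wait || true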
00:06:23.304 [2024-12-16 05:24:03.441791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61881 ] 00:06:23.563 [2024-12-16 05:24:03.621420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.563 [2024-12-16 05:24:03.706931] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.822 [2024-12-16 05:24:03.902285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61897 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61897 /var/tmp/spdk2.sock 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61897 /var/tmp/spdk2.sock 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61897 /var/tmp/spdk2.sock 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61897 ']' 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.390 05:24:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.390 [2024-12-16 05:24:04.526061] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:24.390 [2024-12-16 05:24:04.526264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61897 ] 00:06:24.649 [2024-12-16 05:24:04.714874] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61881 has claimed it. 00:06:24.649 [2024-12-16 05:24:04.714963] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.908 ERROR: process (pid: 61897) is no longer running 00:06:24.908 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61897) - No such process 00:06:24.908 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.908 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:24.908 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:24.908 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.909 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.909 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.909 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61881 00:06:24.909 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61881 00:06:24.909 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61881 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61881 ']' 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61881 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61881 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.476 killing process with pid 61881 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61881' 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61881 00:06:25.476 05:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61881 00:06:27.382 00:06:27.382 real 0m3.921s 00:06:27.382 user 0m4.359s 00:06:27.382 sys 0m0.685s 00:06:27.382 05:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.382 05:24:07 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:27.382 ************************************ 00:06:27.382 END TEST locking_app_on_locked_coremask 00:06:27.382 ************************************ 00:06:27.382 05:24:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:27.382 05:24:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.382 05:24:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.382 05:24:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.382 ************************************ 00:06:27.382 START TEST locking_overlapped_coremask 00:06:27.382 ************************************ 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61961 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61961 /var/tmp/spdk.sock 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61961 ']' 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.382 05:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.382 [2024-12-16 05:24:07.418247] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
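The two locking_app_on_*_coremask tests above depend on spdk_tgt claiming each core it runs on via a lock on /var/tmp/spdk_cpu_lock_NNN (the claim_cpu_cores path in app.c visible in the log), which is why the second target launched under NOT waitforlisten exits with "Cannot create lock on core 0, probably process 61881 has claimed it". A minimal illustrative sketch of that kind of per-core advisory locking, using flock rather than SPDK's actual implementation and assuming core 0:

    #!/usr/bin/env bash
    # Illustrative only: emulate claiming core 0 through an advisory file lock.
    # SPDK's real claim is done inside app.c (claim_cpu_cores); this sketch just
    # shows why "lslocks -p <pid> | grep spdk_cpu_lock" finds an entry for the
    # running target and why a second claimant fails immediately.
    exec 9>/var/tmp/spdk_cpu_lock_000
    if ! flock -n 9; then
        echo 'Cannot create lock on core 0, probably another process has claimed it' >&2
        exit 1
    fi
    echo 'core 0 claimed; the lock is held until fd 9 is closed'
    sleep infinity

Releasing the lock is implicit: when the process exits the descriptor closes, and the next target can claim the core, which is exactly what the killprocess/waitforlisten sequences in this log exercise.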
00:06:27.382 [2024-12-16 05:24:07.418939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61961 ] 00:06:27.382 [2024-12-16 05:24:07.595794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.641 [2024-12-16 05:24:07.682356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.641 [2024-12-16 05:24:07.682503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.641 [2024-12-16 05:24:07.682522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.641 [2024-12-16 05:24:07.891427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61979 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61979 /var/tmp/spdk2.sock 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61979 /var/tmp/spdk2.sock 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61979 /var/tmp/spdk2.sock 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61979 ']' 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.209 05:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.468 [2024-12-16 05:24:08.477393] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:28.468 [2024-12-16 05:24:08.477564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61979 ] 00:06:28.468 [2024-12-16 05:24:08.660310] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61961 has claimed it. 00:06:28.468 [2024-12-16 05:24:08.660408] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.036 ERROR: process (pid: 61979) is no longer running 00:06:29.036 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61979) - No such process 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61961 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61961 ']' 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61961 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61961 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.036 killing process with pid 61961 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61961' 00:06:29.036 05:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61961 00:06:29.036 05:24:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61961 00:06:30.941 00:06:30.941 real 0m3.741s 00:06:30.941 user 0m10.277s 00:06:30.941 sys 0m0.531s 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.941 ************************************ 00:06:30.941 END TEST locking_overlapped_coremask 00:06:30.941 ************************************ 00:06:30.941 05:24:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.941 05:24:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.941 05:24:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.941 05:24:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.941 ************************************ 00:06:30.941 START TEST locking_overlapped_coremask_via_rpc 00:06:30.941 ************************************ 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62038 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62038 /var/tmp/spdk.sock 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62038 ']' 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.941 05:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.941 [2024-12-16 05:24:11.183815] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:30.941 [2024-12-16 05:24:11.183959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62038 ] 00:06:31.200 [2024-12-16 05:24:11.349736] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.200 [2024-12-16 05:24:11.349792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.200 [2024-12-16 05:24:11.435465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.200 [2024-12-16 05:24:11.435560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.200 [2024-12-16 05:24:11.435567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.459 [2024-12-16 05:24:11.627964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62056 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62056 /var/tmp/spdk2.sock 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62056 ']' 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.027 05:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.027 [2024-12-16 05:24:12.210260] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:32.027 [2024-12-16 05:24:12.210658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62056 ] 00:06:32.286 [2024-12-16 05:24:12.395326] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.286 [2024-12-16 05:24:12.395395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.545 [2024-12-16 05:24:12.584575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.545 [2024-12-16 05:24:12.587707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.545 [2024-12-16 05:24:12.587714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.804 [2024-12-16 05:24:12.997157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.741 05:24:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.741 [2024-12-16 05:24:13.992913] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62038 has claimed it. 00:06:34.001 request: 00:06:34.001 { 00:06:34.001 "method": "framework_enable_cpumask_locks", 00:06:34.001 "req_id": 1 00:06:34.001 } 00:06:34.001 Got JSON-RPC error response 00:06:34.001 response: 00:06:34.001 { 00:06:34.001 "code": -32603, 00:06:34.001 "message": "Failed to claim CPU core: 2" 00:06:34.001 } 00:06:34.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62038 /var/tmp/spdk.sock 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62038 ']' 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.001 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.260 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.260 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.260 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62056 /var/tmp/spdk2.sock 00:06:34.260 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 62056 ']' 00:06:34.260 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.260 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.260 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
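In locking_overlapped_coremask_via_rpc both targets start with --disable-cpumask-locks, so the core claim is deferred until the framework_enable_cpumask_locks RPC; the second claim then fails with the JSON-RPC -32603 "Failed to claim CPU core: 2" response shown above. A rough by-hand reproduction, using only commands that appear in this log (paths relative to the spdk repo; the real test waits for each listener instead of sleeping):

    #!/usr/bin/env bash
    # Two targets with overlapping core masks (0x7 and 0x1c both contain core 2),
    # neither claiming cores at startup.
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    sleep 2
    # First enable succeeds and takes /var/tmp/spdk_cpu_lock_{000..002}.
    scripts/rpc.py framework_enable_cpumask_locks
    # Second enable fails: core 2 is already claimed, so rpc.py prints the
    # -32603 "Failed to claim CPU core: 2" response and exits non-zero.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks || echo 'claim failed as expected'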
00:06:34.260 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.260 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.519 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.519 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.519 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.519 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.519 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.519 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.519 00:06:34.519 real 0m3.527s 00:06:34.519 user 0m1.503s 00:06:34.519 sys 0m0.190s 00:06:34.520 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.520 05:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.520 ************************************ 00:06:34.520 END TEST locking_overlapped_coremask_via_rpc 00:06:34.520 ************************************ 00:06:34.520 05:24:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.520 05:24:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62038 ]] 00:06:34.520 05:24:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62038 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62038 ']' 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62038 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62038 00:06:34.520 killing process with pid 62038 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62038' 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 62038 00:06:34.520 05:24:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 62038 00:06:36.425 05:24:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62056 ]] 00:06:36.425 05:24:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62056 00:06:36.425 05:24:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62056 ']' 00:06:36.425 05:24:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62056 00:06:36.425 05:24:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:36.425 05:24:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.425 
05:24:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62056 00:06:36.684 killing process with pid 62056 00:06:36.684 05:24:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:36.684 05:24:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:36.684 05:24:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62056' 00:06:36.684 05:24:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 62056 00:06:36.684 05:24:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 62056 00:06:38.619 05:24:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.619 05:24:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:38.619 05:24:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62038 ]] 00:06:38.619 05:24:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62038 00:06:38.619 05:24:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62038 ']' 00:06:38.619 05:24:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62038 00:06:38.619 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (62038) - No such process 00:06:38.619 Process with pid 62038 is not found 00:06:38.620 05:24:18 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 62038 is not found' 00:06:38.620 05:24:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62056 ]] 00:06:38.620 05:24:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62056 00:06:38.620 05:24:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 62056 ']' 00:06:38.620 05:24:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 62056 00:06:38.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (62056) - No such process 00:06:38.620 Process with pid 62056 is not found 00:06:38.620 05:24:18 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 62056 is not found' 00:06:38.620 05:24:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.620 00:06:38.620 real 0m40.066s 00:06:38.620 user 1m10.401s 00:06:38.620 sys 0m5.891s 00:06:38.620 05:24:18 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.620 05:24:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.620 ************************************ 00:06:38.620 END TEST cpu_locks 00:06:38.620 ************************************ 00:06:38.620 00:06:38.620 real 1m11.684s 00:06:38.620 user 2m14.565s 00:06:38.620 sys 0m9.521s 00:06:38.620 05:24:18 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.620 05:24:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.620 ************************************ 00:06:38.620 END TEST event 00:06:38.620 ************************************ 00:06:38.620 05:24:18 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:38.620 05:24:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.620 05:24:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.620 05:24:18 -- common/autotest_common.sh@10 -- # set +x 00:06:38.620 ************************************ 00:06:38.620 START TEST thread 00:06:38.620 ************************************ 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:38.620 * Looking for test storage... 
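Two autotest_common.sh helpers recur throughout the cpu_locks output above: killprocess, which verifies the pid still exists and belongs to a reactor process before killing and reaping it, and NOT, which runs a command and succeeds only if that command fails (used for the waitforlisten calls that are expected to error out). Simplified sketches of both patterns, omitting the uname/sudo and exit-status guards the real helpers perform:

    # Kill a test target and wait for it so its exit status is reaped.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1            # not running (or already gone)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

    # Invert a command's status: succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1                          # unexpected success
        fi
        return 0                              # expected failure
    }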
00:06:38.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.620 05:24:18 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.620 05:24:18 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.620 05:24:18 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.620 05:24:18 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.620 05:24:18 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.620 05:24:18 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.620 05:24:18 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.620 05:24:18 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.620 05:24:18 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.620 05:24:18 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.620 05:24:18 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.620 05:24:18 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:38.620 05:24:18 thread -- scripts/common.sh@345 -- # : 1 00:06:38.620 05:24:18 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.620 05:24:18 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.620 05:24:18 thread -- scripts/common.sh@365 -- # decimal 1 00:06:38.620 05:24:18 thread -- scripts/common.sh@353 -- # local d=1 00:06:38.620 05:24:18 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.620 05:24:18 thread -- scripts/common.sh@355 -- # echo 1 00:06:38.620 05:24:18 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.620 05:24:18 thread -- scripts/common.sh@366 -- # decimal 2 00:06:38.620 05:24:18 thread -- scripts/common.sh@353 -- # local d=2 00:06:38.620 05:24:18 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.620 05:24:18 thread -- scripts/common.sh@355 -- # echo 2 00:06:38.620 05:24:18 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.620 05:24:18 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.620 05:24:18 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.620 05:24:18 thread -- scripts/common.sh@368 -- # return 0 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.620 --rc genhtml_branch_coverage=1 00:06:38.620 --rc genhtml_function_coverage=1 00:06:38.620 --rc genhtml_legend=1 00:06:38.620 --rc geninfo_all_blocks=1 00:06:38.620 --rc geninfo_unexecuted_blocks=1 00:06:38.620 00:06:38.620 ' 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.620 --rc genhtml_branch_coverage=1 00:06:38.620 --rc genhtml_function_coverage=1 00:06:38.620 --rc genhtml_legend=1 00:06:38.620 --rc geninfo_all_blocks=1 00:06:38.620 --rc geninfo_unexecuted_blocks=1 00:06:38.620 00:06:38.620 ' 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:38.620 --rc genhtml_branch_coverage=1 00:06:38.620 --rc genhtml_function_coverage=1 00:06:38.620 --rc genhtml_legend=1 00:06:38.620 --rc geninfo_all_blocks=1 00:06:38.620 --rc geninfo_unexecuted_blocks=1 00:06:38.620 00:06:38.620 ' 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.620 --rc genhtml_branch_coverage=1 00:06:38.620 --rc genhtml_function_coverage=1 00:06:38.620 --rc genhtml_legend=1 00:06:38.620 --rc geninfo_all_blocks=1 00:06:38.620 --rc geninfo_unexecuted_blocks=1 00:06:38.620 00:06:38.620 ' 00:06:38.620 05:24:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.620 05:24:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.620 ************************************ 00:06:38.620 START TEST thread_poller_perf 00:06:38.620 ************************************ 00:06:38.620 05:24:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.879 [2024-12-16 05:24:18.875742] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:38.879 [2024-12-16 05:24:18.875924] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62232 ] 00:06:38.879 [2024-12-16 05:24:19.056678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.137 [2024-12-16 05:24:19.186343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.137 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:40.513 [2024-12-16T05:24:20.772Z] ====================================== 00:06:40.513 [2024-12-16T05:24:20.772Z] busy:2211075446 (cyc) 00:06:40.513 [2024-12-16T05:24:20.772Z] total_run_count: 324000 00:06:40.513 [2024-12-16T05:24:20.772Z] tsc_hz: 2200000000 (cyc) 00:06:40.513 [2024-12-16T05:24:20.772Z] ====================================== 00:06:40.513 [2024-12-16T05:24:20.772Z] poller_cost: 6824 (cyc), 3101 (nsec) 00:06:40.513 00:06:40.513 real 0m1.554s 00:06:40.513 user 0m1.353s 00:06:40.513 sys 0m0.091s 00:06:40.513 05:24:20 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.513 05:24:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.513 ************************************ 00:06:40.513 END TEST thread_poller_perf 00:06:40.513 ************************************ 00:06:40.513 05:24:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.513 05:24:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:40.513 05:24:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.513 05:24:20 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.513 ************************************ 00:06:40.513 START TEST thread_poller_perf 00:06:40.513 ************************************ 00:06:40.513 05:24:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.513 [2024-12-16 05:24:20.491194] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:40.513 [2024-12-16 05:24:20.491514] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62269 ] 00:06:40.513 [2024-12-16 05:24:20.677053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.772 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:40.772 [2024-12-16 05:24:20.801030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.153 [2024-12-16T05:24:22.412Z] ====================================== 00:06:42.153 [2024-12-16T05:24:22.412Z] busy:2203725638 (cyc) 00:06:42.153 [2024-12-16T05:24:22.412Z] total_run_count: 4188000 00:06:42.153 [2024-12-16T05:24:22.412Z] tsc_hz: 2200000000 (cyc) 00:06:42.153 [2024-12-16T05:24:22.412Z] ====================================== 00:06:42.153 [2024-12-16T05:24:22.412Z] poller_cost: 526 (cyc), 239 (nsec) 00:06:42.153 00:06:42.153 real 0m1.592s 00:06:42.153 user 0m1.390s 00:06:42.153 sys 0m0.091s 00:06:42.153 ************************************ 00:06:42.153 END TEST thread_poller_perf 00:06:42.153 ************************************ 00:06:42.153 05:24:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.153 05:24:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.153 05:24:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.153 ************************************ 00:06:42.153 END TEST thread 00:06:42.153 ************************************ 00:06:42.153 00:06:42.153 real 0m3.457s 00:06:42.153 user 0m2.892s 00:06:42.153 sys 0m0.341s 00:06:42.153 05:24:22 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.153 05:24:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.153 05:24:22 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:42.153 05:24:22 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.153 05:24:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.153 05:24:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.153 05:24:22 -- common/autotest_common.sh@10 -- # set +x 00:06:42.153 ************************************ 00:06:42.153 START TEST app_cmdline 00:06:42.153 ************************************ 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.153 * Looking for test storage... 
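The two poller_perf result blocks above are internally consistent, assuming poller_cost is reported as busy cycles divided by total_run_count and the nanosecond figure is derived from the 2200000000 Hz TSC. A quick arithmetic check (integer division, matching the tool's output):

    # 1 us period run: 2211075446 busy cycles over 324000 iterations
    echo $(( 2211075446 / 324000 ))                 # 6824 cyc
    echo $(( 6824 * 1000000000 / 2200000000 ))      # 3101 nsec
    # 0 us period run: 2203725638 busy cycles over 4188000 iterations
    echo $(( 2203725638 / 4188000 ))                # 526 cyc
    echo $(( 526 * 1000000000 / 2200000000 ))       # 239 nsec

The higher per-iteration cost in the first run is expected: its pollers are timed (1 microsecond period) and go through the reactor's timer bookkeeping, while the second run's period-0 pollers sit on the plain active-poller list.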
00:06:42.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.153 05:24:22 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.153 --rc genhtml_branch_coverage=1 00:06:42.153 --rc genhtml_function_coverage=1 00:06:42.153 --rc genhtml_legend=1 00:06:42.153 --rc geninfo_all_blocks=1 00:06:42.153 --rc geninfo_unexecuted_blocks=1 00:06:42.153 00:06:42.153 ' 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.153 --rc genhtml_branch_coverage=1 00:06:42.153 --rc genhtml_function_coverage=1 00:06:42.153 --rc genhtml_legend=1 00:06:42.153 --rc geninfo_all_blocks=1 00:06:42.153 --rc geninfo_unexecuted_blocks=1 00:06:42.153 00:06:42.153 ' 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.153 --rc genhtml_branch_coverage=1 00:06:42.153 --rc genhtml_function_coverage=1 00:06:42.153 --rc genhtml_legend=1 00:06:42.153 --rc geninfo_all_blocks=1 00:06:42.153 --rc geninfo_unexecuted_blocks=1 00:06:42.153 00:06:42.153 ' 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.153 --rc genhtml_branch_coverage=1 00:06:42.153 --rc genhtml_function_coverage=1 00:06:42.153 --rc genhtml_legend=1 00:06:42.153 --rc geninfo_all_blocks=1 00:06:42.153 --rc geninfo_unexecuted_blocks=1 00:06:42.153 00:06:42.153 ' 00:06:42.153 05:24:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.153 05:24:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62358 00:06:42.153 05:24:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62358 00:06:42.153 05:24:22 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 62358 ']' 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.153 05:24:22 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.154 05:24:22 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.154 05:24:22 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.154 05:24:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.413 [2024-12-16 05:24:22.470676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:42.413 [2024-12-16 05:24:22.471135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62358 ] 00:06:42.413 [2024-12-16 05:24:22.648788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.672 [2024-12-16 05:24:22.735639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.931 [2024-12-16 05:24:22.933415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.189 05:24:23 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.189 05:24:23 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:43.189 05:24:23 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:43.448 { 00:06:43.448 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:43.448 "fields": { 00:06:43.448 "major": 25, 00:06:43.448 "minor": 1, 00:06:43.448 "patch": 0, 00:06:43.448 "suffix": "-pre", 00:06:43.448 "commit": "e01cb43b8" 00:06:43.448 } 00:06:43.448 } 00:06:43.448 05:24:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:43.448 05:24:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:43.448 05:24:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:43.448 05:24:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:43.448 05:24:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:43.448 05:24:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:43.448 05:24:23 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.448 05:24:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.448 05:24:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:43.448 05:24:23 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.707 05:24:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:43.707 05:24:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:43.707 05:24:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:43.707 05:24:23 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.966 request: 00:06:43.966 { 00:06:43.966 "method": "env_dpdk_get_mem_stats", 00:06:43.966 "req_id": 1 00:06:43.966 } 00:06:43.966 Got JSON-RPC error response 00:06:43.966 response: 00:06:43.966 { 00:06:43.966 "code": -32601, 00:06:43.966 "message": "Method not found" 00:06:43.966 } 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.966 05:24:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62358 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 62358 ']' 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 62358 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62358 00:06:43.966 killing process with pid 62358 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62358' 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@973 -- # kill 62358 00:06:43.966 05:24:24 app_cmdline -- common/autotest_common.sh@978 -- # wait 62358 00:06:45.870 00:06:45.870 real 0m3.803s 00:06:45.870 user 0m4.359s 00:06:45.870 sys 0m0.505s 00:06:45.870 05:24:25 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.870 ************************************ 00:06:45.870 END TEST app_cmdline 00:06:45.870 ************************************ 00:06:45.870 05:24:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.870 05:24:25 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:45.870 05:24:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.870 05:24:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.870 05:24:25 -- common/autotest_common.sh@10 -- # set +x 00:06:45.870 ************************************ 00:06:45.870 START TEST version 00:06:45.870 ************************************ 00:06:45.870 05:24:26 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:45.870 * Looking for test storage... 
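The app_cmdline test above starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; anything else, including env_dpdk_get_mem_stats, returns the JSON-RPC -32601 "Method not found" error seen in the log. The behavior can be checked directly with the same binaries and scripts (paths relative to the spdk repo):

    #!/usr/bin/env bash
    # Start a target that only exposes two RPC methods.
    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 2
    scripts/rpc.py spdk_get_version          # allowed: prints the version JSON
    scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two methods
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected: JSON-RPC error -32601, non-zero exit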
00:06:45.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:45.870 05:24:26 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.870 05:24:26 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.870 05:24:26 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.130 05:24:26 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.130 05:24:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.130 05:24:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.130 05:24:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.130 05:24:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.130 05:24:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.130 05:24:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.130 05:24:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.130 05:24:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.130 05:24:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.130 05:24:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.130 05:24:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.130 05:24:26 version -- scripts/common.sh@344 -- # case "$op" in 00:06:46.130 05:24:26 version -- scripts/common.sh@345 -- # : 1 00:06:46.130 05:24:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.130 05:24:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.130 05:24:26 version -- scripts/common.sh@365 -- # decimal 1 00:06:46.130 05:24:26 version -- scripts/common.sh@353 -- # local d=1 00:06:46.130 05:24:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.130 05:24:26 version -- scripts/common.sh@355 -- # echo 1 00:06:46.130 05:24:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.130 05:24:26 version -- scripts/common.sh@366 -- # decimal 2 00:06:46.130 05:24:26 version -- scripts/common.sh@353 -- # local d=2 00:06:46.130 05:24:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.130 05:24:26 version -- scripts/common.sh@355 -- # echo 2 00:06:46.130 05:24:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.130 05:24:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.130 05:24:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.130 05:24:26 version -- scripts/common.sh@368 -- # return 0 00:06:46.130 05:24:26 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.130 05:24:26 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.130 --rc genhtml_branch_coverage=1 00:06:46.130 --rc genhtml_function_coverage=1 00:06:46.130 --rc genhtml_legend=1 00:06:46.130 --rc geninfo_all_blocks=1 00:06:46.130 --rc geninfo_unexecuted_blocks=1 00:06:46.130 00:06:46.130 ' 00:06:46.130 05:24:26 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.130 --rc genhtml_branch_coverage=1 00:06:46.130 --rc genhtml_function_coverage=1 00:06:46.130 --rc genhtml_legend=1 00:06:46.130 --rc geninfo_all_blocks=1 00:06:46.130 --rc geninfo_unexecuted_blocks=1 00:06:46.130 00:06:46.130 ' 00:06:46.130 05:24:26 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.130 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:46.130 --rc genhtml_branch_coverage=1 00:06:46.130 --rc genhtml_function_coverage=1 00:06:46.130 --rc genhtml_legend=1 00:06:46.130 --rc geninfo_all_blocks=1 00:06:46.130 --rc geninfo_unexecuted_blocks=1 00:06:46.130 00:06:46.130 ' 00:06:46.130 05:24:26 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.130 --rc genhtml_branch_coverage=1 00:06:46.130 --rc genhtml_function_coverage=1 00:06:46.130 --rc genhtml_legend=1 00:06:46.130 --rc geninfo_all_blocks=1 00:06:46.130 --rc geninfo_unexecuted_blocks=1 00:06:46.130 00:06:46.130 ' 00:06:46.130 05:24:26 version -- app/version.sh@17 -- # get_header_version major 00:06:46.130 05:24:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:46.130 05:24:26 version -- app/version.sh@14 -- # cut -f2 00:06:46.130 05:24:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.130 05:24:26 version -- app/version.sh@17 -- # major=25 00:06:46.130 05:24:26 version -- app/version.sh@18 -- # get_header_version minor 00:06:46.130 05:24:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:46.130 05:24:26 version -- app/version.sh@14 -- # cut -f2 00:06:46.130 05:24:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.130 05:24:26 version -- app/version.sh@18 -- # minor=1 00:06:46.130 05:24:26 version -- app/version.sh@19 -- # get_header_version patch 00:06:46.130 05:24:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:46.130 05:24:26 version -- app/version.sh@14 -- # cut -f2 00:06:46.130 05:24:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.130 05:24:26 version -- app/version.sh@19 -- # patch=0 00:06:46.130 05:24:26 version -- app/version.sh@20 -- # get_header_version suffix 00:06:46.130 05:24:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:46.130 05:24:26 version -- app/version.sh@14 -- # cut -f2 00:06:46.130 05:24:26 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.130 05:24:26 version -- app/version.sh@20 -- # suffix=-pre 00:06:46.130 05:24:26 version -- app/version.sh@22 -- # version=25.1 00:06:46.130 05:24:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:46.130 05:24:26 version -- app/version.sh@28 -- # version=25.1rc0 00:06:46.130 05:24:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:46.130 05:24:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:46.130 05:24:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:46.130 05:24:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:46.130 00:06:46.130 real 0m0.240s 00:06:46.130 user 0m0.151s 00:06:46.130 sys 0m0.124s 00:06:46.130 ************************************ 00:06:46.130 END TEST version 00:06:46.130 ************************************ 00:06:46.130 05:24:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.130 05:24:26 version -- common/autotest_common.sh@10 -- # set +x 00:06:46.130 05:24:26 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:46.130 05:24:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:46.130 05:24:26 -- spdk/autotest.sh@194 -- # uname -s 00:06:46.130 05:24:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:46.130 05:24:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.130 05:24:26 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:46.130 05:24:26 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:46.130 05:24:26 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:46.130 05:24:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.130 05:24:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.130 05:24:26 -- common/autotest_common.sh@10 -- # set +x 00:06:46.130 ************************************ 00:06:46.130 START TEST spdk_dd 00:06:46.130 ************************************ 00:06:46.130 05:24:26 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:46.130 * Looking for test storage... 00:06:46.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:46.131 05:24:26 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.131 05:24:26 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.131 05:24:26 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.390 05:24:26 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:46.390 05:24:26 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.390 05:24:26 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.390 --rc genhtml_branch_coverage=1 00:06:46.390 --rc genhtml_function_coverage=1 00:06:46.390 --rc genhtml_legend=1 00:06:46.390 --rc geninfo_all_blocks=1 00:06:46.390 --rc geninfo_unexecuted_blocks=1 00:06:46.390 00:06:46.390 ' 00:06:46.390 05:24:26 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.390 --rc genhtml_branch_coverage=1 00:06:46.390 --rc genhtml_function_coverage=1 00:06:46.390 --rc genhtml_legend=1 00:06:46.390 --rc geninfo_all_blocks=1 00:06:46.390 --rc geninfo_unexecuted_blocks=1 00:06:46.390 00:06:46.390 ' 00:06:46.390 05:24:26 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.390 --rc genhtml_branch_coverage=1 00:06:46.390 --rc genhtml_function_coverage=1 00:06:46.390 --rc genhtml_legend=1 00:06:46.390 --rc geninfo_all_blocks=1 00:06:46.390 --rc geninfo_unexecuted_blocks=1 00:06:46.390 00:06:46.390 ' 00:06:46.390 05:24:26 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.390 --rc genhtml_branch_coverage=1 00:06:46.390 --rc genhtml_function_coverage=1 00:06:46.390 --rc genhtml_legend=1 00:06:46.390 --rc geninfo_all_blocks=1 00:06:46.390 --rc geninfo_unexecuted_blocks=1 00:06:46.390 00:06:46.390 ' 00:06:46.390 05:24:26 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.390 05:24:26 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.390 05:24:26 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.390 05:24:26 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.390 05:24:26 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.390 05:24:26 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:46.390 05:24:26 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.390 05:24:26 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:46.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.650 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:46.650 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:46.650 05:24:26 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:46.650 05:24:26 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:46.650 05:24:26 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:46.650 05:24:26 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:46.650 05:24:26 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:46.650 05:24:26 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:46.650 05:24:26 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:46.650 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.650 05:24:26 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.650 05:24:26 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:46.911 
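For reference, the nvme_in_userspace walk traced here (iter_pci_class_code 01 08 02 in scripts/common.sh) reduces to a single lspci pipeline: class 01 is mass storage, subclass 08 is NVM, prog-if 02 is the NVMe interface. Shown only as an approximation of what the helper assembles step by step:

    lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this VM it resolves to 0000:00:10.0 and 0000:00:11.0, the two QEMU NVMe controllers that dd.sh then stores in its nvmes array.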
05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 
05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.911 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:46.912 * spdk_dd linked to liburing 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:46.912 05:24:26 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:46.912 05:24:26 spdk_dd 
-- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:46.912 05:24:26 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:46.913 05:24:26 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:46.913 05:24:26 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:46.913 05:24:26 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:46.913 05:24:26 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:46.913 05:24:26 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:46.913 05:24:26 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:46.913 05:24:26 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:46.913 05:24:26 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:46.913 05:24:26 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:46.913 05:24:26 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.913 05:24:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:46.913 ************************************ 00:06:46.913 START TEST spdk_dd_basic_rw 00:06:46.913 ************************************ 00:06:46.913 05:24:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:46.913 * Looking for test storage... 
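The liburing probe that finished just above never touches the devices; dd/common.sh simply dumps the dynamic section of the spdk_dd binary and scans the DT_NEEDED entries for liburing.so.*. A rough by-hand equivalent, using the same binary path as this job:

    objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED | grep liburing

Once liburing.so.2 turns up it prints '* spdk_dd linked to liburing', sources test/common/build_config.sh (CONFIG_URING=y in this build), and exports liburing_in_use=1, which is why the (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) guard in dd.sh falls through and basic_rw.sh is launched directly against 0000:00:10.0 and 0000:00:11.0.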
00:06:46.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.913 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.173 --rc genhtml_branch_coverage=1 00:06:47.173 --rc genhtml_function_coverage=1 00:06:47.173 --rc genhtml_legend=1 00:06:47.173 --rc geninfo_all_blocks=1 00:06:47.173 --rc geninfo_unexecuted_blocks=1 00:06:47.173 00:06:47.173 ' 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.173 --rc genhtml_branch_coverage=1 00:06:47.173 --rc genhtml_function_coverage=1 00:06:47.173 --rc genhtml_legend=1 00:06:47.173 --rc geninfo_all_blocks=1 00:06:47.173 --rc geninfo_unexecuted_blocks=1 00:06:47.173 00:06:47.173 ' 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.173 --rc genhtml_branch_coverage=1 00:06:47.173 --rc genhtml_function_coverage=1 00:06:47.173 --rc genhtml_legend=1 00:06:47.173 --rc geninfo_all_blocks=1 00:06:47.173 --rc geninfo_unexecuted_blocks=1 00:06:47.173 00:06:47.173 ' 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.173 --rc genhtml_branch_coverage=1 00:06:47.173 --rc genhtml_function_coverage=1 00:06:47.173 --rc genhtml_legend=1 00:06:47.173 --rc geninfo_all_blocks=1 00:06:47.173 --rc geninfo_unexecuted_blocks=1 00:06:47.173 00:06:47.173 ' 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.173 05:24:27 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:47.173 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.435 05:24:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.435 ************************************ 00:06:47.435 START TEST dd_bs_lt_native_bs 00:06:47.435 ************************************ 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.436 05:24:27 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:47.436 { 00:06:47.436 "subsystems": [ 00:06:47.436 { 00:06:47.436 "subsystem": "bdev", 00:06:47.436 "config": [ 00:06:47.436 { 00:06:47.436 "params": { 00:06:47.436 "trtype": "pcie", 00:06:47.436 "traddr": "0000:00:10.0", 00:06:47.436 "name": "Nvme0" 00:06:47.436 }, 00:06:47.436 "method": "bdev_nvme_attach_controller" 00:06:47.436 }, 00:06:47.436 { 00:06:47.436 "method": "bdev_wait_for_examine" 00:06:47.436 } 00:06:47.436 ] 00:06:47.436 } 00:06:47.436 ] 00:06:47.436 } 00:06:47.436 [2024-12-16 05:24:27.595995] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:47.436 [2024-12-16 05:24:27.596378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62728 ] 00:06:47.695 [2024-12-16 05:24:27.782178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.695 [2024-12-16 05:24:27.906999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.954 [2024-12-16 05:24:28.108746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.213 [2024-12-16 05:24:28.300023] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:48.213 [2024-12-16 05:24:28.300127] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.781 [2024-12-16 05:24:28.838550] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:49.040 ************************************ 00:06:49.040 END TEST dd_bs_lt_native_bs 00:06:49.040 ************************************ 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.040 00:06:49.040 real 0m1.585s 00:06:49.040 user 0m1.328s 00:06:49.040 sys 0m0.208s 00:06:49.040 
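The probe traced above boils down to: capture the spdk_nvme_identify output for the controller at 0000:00:10.0, find which LBA format is current, and read that format's data size, which is where native_bs=4096 comes from; dd_bs_lt_native_bs then expects the --bs=2048 copy to fail with the "--bs value cannot be less than ... native block size" error shown. A minimal bash sketch of the extraction, not the SPDK dd/common.sh code itself; it assumes spdk_nvme_identify is on PATH and uses illustrative variable names:

pci=0000:00:10.0
mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
re='Current LBA Format: *LBA Format #([0-9]+)'
[[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}        # "04" in the run above
re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ ${id[*]} =~ $re ]] && native_bs=${BASH_REMATCH[1]}   # 4096 in the run above
echo "$native_bs"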
05:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.040 ************************************ 00:06:49.040 START TEST dd_rw 00:06:49.040 ************************************ 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:49.040 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:49.041 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.609 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:49.609 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:49.609 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.609 05:24:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.609 { 00:06:49.609 "subsystems": [ 00:06:49.609 { 00:06:49.609 "subsystem": "bdev", 00:06:49.609 "config": [ 00:06:49.609 { 00:06:49.609 "params": { 00:06:49.609 "trtype": "pcie", 00:06:49.609 "traddr": "0000:00:10.0", 00:06:49.609 "name": "Nvme0" 00:06:49.609 }, 00:06:49.609 "method": "bdev_nvme_attach_controller" 00:06:49.609 }, 00:06:49.609 { 00:06:49.609 "method": "bdev_wait_for_examine" 00:06:49.609 } 00:06:49.609 ] 00:06:49.609 } 00:06:49.609 
] 00:06:49.609 } 00:06:49.609 [2024-12-16 05:24:29.783763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:49.609 [2024-12-16 05:24:29.784121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62771 ] 00:06:49.868 [2024-12-16 05:24:29.964214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.868 [2024-12-16 05:24:30.052534] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.127 [2024-12-16 05:24:30.210147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.127  [2024-12-16T05:24:31.323Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:51.064 00:06:51.323 05:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:51.323 05:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:51.323 05:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:51.323 05:24:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.323 { 00:06:51.323 "subsystems": [ 00:06:51.323 { 00:06:51.323 "subsystem": "bdev", 00:06:51.323 "config": [ 00:06:51.323 { 00:06:51.323 "params": { 00:06:51.323 "trtype": "pcie", 00:06:51.323 "traddr": "0000:00:10.0", 00:06:51.323 "name": "Nvme0" 00:06:51.323 }, 00:06:51.323 "method": "bdev_nvme_attach_controller" 00:06:51.323 }, 00:06:51.323 { 00:06:51.323 "method": "bdev_wait_for_examine" 00:06:51.323 } 00:06:51.323 ] 00:06:51.323 } 00:06:51.323 ] 00:06:51.323 } 00:06:51.323 [2024-12-16 05:24:31.428740] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
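Each spdk_dd invocation above receives the same bdev configuration through --json /dev/fd/62: gen_conf emits the JSON shown in the trace and it reaches spdk_dd over an anonymous file descriptor. A minimal equivalent that writes the same config to a temporary file instead; the $conf name, the temp-file approach and the dd.dump0 path are illustrative, not the harness's own mechanism:

conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json "$conf"

Handing the config over /dev/fd/62, as the harness does, is equivalent; it simply avoids leaving a named file behind.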
00:06:51.323 [2024-12-16 05:24:31.428931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62796 ] 00:06:51.582 [2024-12-16 05:24:31.604986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.582 [2024-12-16 05:24:31.687799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.862 [2024-12-16 05:24:31.842698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.862  [2024-12-16T05:24:33.099Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:52.840 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:52.840 05:24:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.840 { 00:06:52.840 "subsystems": [ 00:06:52.840 { 00:06:52.840 "subsystem": "bdev", 00:06:52.840 "config": [ 00:06:52.840 { 00:06:52.840 "params": { 00:06:52.840 "trtype": "pcie", 00:06:52.840 "traddr": "0000:00:10.0", 00:06:52.840 "name": "Nvme0" 00:06:52.840 }, 00:06:52.840 "method": "bdev_nvme_attach_controller" 00:06:52.840 }, 00:06:52.840 { 00:06:52.840 "method": "bdev_wait_for_examine" 00:06:52.840 } 00:06:52.840 ] 00:06:52.840 } 00:06:52.840 ] 00:06:52.840 } 00:06:52.840 [2024-12-16 05:24:32.851371] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
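Each (block size, queue depth) pass in this test is a plain round trip: write a generated file to the Nvme0n1 bdev, read the same number of blocks back into a second file, and diff the two. A minimal sketch of one pass, reusing the illustrative $conf file from the sketch above and using /dev/urandom as a stand-in for the harness's gen_bytes helper:

bs=4096 qd=1 count=15
size=$((bs * count))                                   # 61440 bytes, as in the trace
head -c "$size" /dev/urandom > dd.dump0                # stand-in for gen_bytes
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$conf"
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$conf"
diff -q dd.dump0 dd.dump1                              # round trip must be byte-identical

The diff exit status is what makes a pass fail loudly if the data did not survive the round trip.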
00:06:52.840 [2024-12-16 05:24:32.851538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62824 ] 00:06:52.840 [2024-12-16 05:24:33.031514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.099 [2024-12-16 05:24:33.123663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.099 [2024-12-16 05:24:33.268823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.359  [2024-12-16T05:24:34.555Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:54.296 00:06:54.296 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:54.296 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:54.296 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:54.296 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:54.296 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:54.296 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:54.296 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.863 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:54.863 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:54.863 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:54.863 05:24:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.863 { 00:06:54.863 "subsystems": [ 00:06:54.863 { 00:06:54.863 "subsystem": "bdev", 00:06:54.863 "config": [ 00:06:54.863 { 00:06:54.863 "params": { 00:06:54.863 "trtype": "pcie", 00:06:54.863 "traddr": "0000:00:10.0", 00:06:54.863 "name": "Nvme0" 00:06:54.863 }, 00:06:54.863 "method": "bdev_nvme_attach_controller" 00:06:54.863 }, 00:06:54.863 { 00:06:54.863 "method": "bdev_wait_for_examine" 00:06:54.863 } 00:06:54.863 ] 00:06:54.864 } 00:06:54.864 ] 00:06:54.864 } 00:06:54.864 [2024-12-16 05:24:35.065269] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
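Between passes, the clear_nvme step traced above wipes what the previous pass wrote so a later read-back cannot be satisfied by stale data; it reduces to a single 1 MiB zero write through the same bdev. A sketch, again assuming the illustrative $conf file:

# zero the region written by the previous pass (one 1 MiB block)
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"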
00:06:54.864 [2024-12-16 05:24:35.065431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62855 ] 00:06:55.122 [2024-12-16 05:24:35.246700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.122 [2024-12-16 05:24:35.356744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.381 [2024-12-16 05:24:35.552139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.640  [2024-12-16T05:24:36.836Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:56.577 00:06:56.577 05:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:56.577 05:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:56.577 05:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:56.577 05:24:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:56.577 { 00:06:56.577 "subsystems": [ 00:06:56.577 { 00:06:56.577 "subsystem": "bdev", 00:06:56.577 "config": [ 00:06:56.577 { 00:06:56.577 "params": { 00:06:56.577 "trtype": "pcie", 00:06:56.577 "traddr": "0000:00:10.0", 00:06:56.577 "name": "Nvme0" 00:06:56.577 }, 00:06:56.577 "method": "bdev_nvme_attach_controller" 00:06:56.577 }, 00:06:56.577 { 00:06:56.577 "method": "bdev_wait_for_examine" 00:06:56.577 } 00:06:56.577 ] 00:06:56.577 } 00:06:56.577 ] 00:06:56.577 } 00:06:56.577 [2024-12-16 05:24:36.680724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:56.577 [2024-12-16 05:24:36.680913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62886 ] 00:06:56.837 [2024-12-16 05:24:36.853408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.837 [2024-12-16 05:24:36.959896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.095 [2024-12-16 05:24:37.151698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.096  [2024-12-16T05:24:38.732Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:58.473 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:58.473 05:24:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.473 { 00:06:58.473 "subsystems": [ 00:06:58.473 { 00:06:58.473 "subsystem": "bdev", 00:06:58.473 "config": [ 00:06:58.473 { 00:06:58.473 "params": { 00:06:58.473 "trtype": "pcie", 00:06:58.473 "traddr": "0000:00:10.0", 00:06:58.473 "name": "Nvme0" 00:06:58.473 }, 00:06:58.473 "method": "bdev_nvme_attach_controller" 00:06:58.473 }, 00:06:58.473 { 00:06:58.473 "method": "bdev_wait_for_examine" 00:06:58.473 } 00:06:58.473 ] 00:06:58.473 } 00:06:58.473 ] 00:06:58.473 } 00:06:58.473 [2024-12-16 05:24:38.507455] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:58.473 [2024-12-16 05:24:38.507667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62908 ] 00:06:58.473 [2024-12-16 05:24:38.688871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.733 [2024-12-16 05:24:38.798609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.991 [2024-12-16 05:24:38.993292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.991  [2024-12-16T05:24:40.186Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:59.927 00:06:59.927 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:59.927 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:59.927 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:59.927 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:59.927 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:59.927 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:59.927 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:59.927 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.495 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:00.495 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:00.495 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:00.495 05:24:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.495 { 00:07:00.495 "subsystems": [ 00:07:00.495 { 00:07:00.495 "subsystem": "bdev", 00:07:00.495 "config": [ 00:07:00.495 { 00:07:00.495 "params": { 00:07:00.495 "trtype": "pcie", 00:07:00.495 "traddr": "0000:00:10.0", 00:07:00.495 "name": "Nvme0" 00:07:00.495 }, 00:07:00.495 "method": "bdev_nvme_attach_controller" 00:07:00.495 }, 00:07:00.495 { 00:07:00.495 "method": "bdev_wait_for_examine" 00:07:00.495 } 00:07:00.495 ] 00:07:00.495 } 00:07:00.495 ] 00:07:00.495 } 00:07:00.754 [2024-12-16 05:24:40.763717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
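The bs=8192 round starting here makes the sizing scheme visible: the block sizes are the native size left-shifted (4096, 8192, 16384) and the block count shrinks as the size grows, so each pass moves close to 60 kB of data (15 x 4096 = 61440, 7 x 8192 = 57344, 3 x 16384 = 49152 bytes). A minimal sketch of that setup with illustrative names; the counts array records the pairing observed in this trace rather than the script's own formula:

native_bs=4096
qds=(1 64)
bss=()
for s in {0..2}; do
  bss+=($((native_bs << s)))          # 4096 8192 16384
done
counts=(15 7 3)                       # observed pairing: size stays near 60 kB
for i in "${!bss[@]}"; do
  bs=${bss[i]} count=${counts[i]}
  size=$((bs * count))                # 61440, 57344, 49152 bytes
  for qd in "${qds[@]}"; do
    : # one write / read-back / diff pass per (bs, qd), as traced in this log
  done
done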
00:07:00.754 [2024-12-16 05:24:40.763879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62945 ] 00:07:00.754 [2024-12-16 05:24:40.948953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.013 [2024-12-16 05:24:41.071776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.272 [2024-12-16 05:24:41.281191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.272  [2024-12-16T05:24:42.911Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:02.652 00:07:02.652 05:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:02.652 05:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:02.652 05:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.652 05:24:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.652 { 00:07:02.652 "subsystems": [ 00:07:02.652 { 00:07:02.652 "subsystem": "bdev", 00:07:02.652 "config": [ 00:07:02.652 { 00:07:02.652 "params": { 00:07:02.652 "trtype": "pcie", 00:07:02.652 "traddr": "0000:00:10.0", 00:07:02.652 "name": "Nvme0" 00:07:02.652 }, 00:07:02.652 "method": "bdev_nvme_attach_controller" 00:07:02.652 }, 00:07:02.652 { 00:07:02.652 "method": "bdev_wait_for_examine" 00:07:02.652 } 00:07:02.652 ] 00:07:02.652 } 00:07:02.652 ] 00:07:02.652 } 00:07:02.652 [2024-12-16 05:24:42.644804] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:02.652 [2024-12-16 05:24:42.644980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62970 ] 00:07:02.652 [2024-12-16 05:24:42.826796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.911 [2024-12-16 05:24:42.937276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.911 [2024-12-16 05:24:43.127811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.170  [2024-12-16T05:24:44.390Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:04.131 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.131 05:24:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.131 { 00:07:04.131 "subsystems": [ 00:07:04.131 { 00:07:04.131 "subsystem": "bdev", 00:07:04.131 "config": [ 00:07:04.131 { 00:07:04.131 "params": { 00:07:04.131 "trtype": "pcie", 00:07:04.131 "traddr": "0000:00:10.0", 00:07:04.131 "name": "Nvme0" 00:07:04.131 }, 00:07:04.131 "method": "bdev_nvme_attach_controller" 00:07:04.131 }, 00:07:04.131 { 00:07:04.131 "method": "bdev_wait_for_examine" 00:07:04.131 } 00:07:04.131 ] 00:07:04.131 } 00:07:04.131 ] 00:07:04.131 } 00:07:04.131 [2024-12-16 05:24:44.295637] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:04.131 [2024-12-16 05:24:44.296016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63006 ] 00:07:04.390 [2024-12-16 05:24:44.481662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.390 [2024-12-16 05:24:44.594578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.649 [2024-12-16 05:24:44.790121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.908  [2024-12-16T05:24:46.103Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:05.844 00:07:05.844 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:05.844 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:05.844 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:05.844 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:05.844 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:05.844 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:05.844 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.779 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:06.779 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:06.779 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.779 05:24:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.779 { 00:07:06.779 "subsystems": [ 00:07:06.779 { 00:07:06.779 "subsystem": "bdev", 00:07:06.779 "config": [ 00:07:06.779 { 00:07:06.779 "params": { 00:07:06.779 "trtype": "pcie", 00:07:06.779 "traddr": "0000:00:10.0", 00:07:06.779 "name": "Nvme0" 00:07:06.779 }, 00:07:06.779 "method": "bdev_nvme_attach_controller" 00:07:06.779 }, 00:07:06.779 { 00:07:06.779 "method": "bdev_wait_for_examine" 00:07:06.779 } 00:07:06.779 ] 00:07:06.779 } 00:07:06.779 ] 00:07:06.779 } 00:07:06.779 [2024-12-16 05:24:46.782609] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:06.779 [2024-12-16 05:24:46.782958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63037 ] 00:07:06.779 [2024-12-16 05:24:46.960669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.038 [2024-12-16 05:24:47.045104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.038 [2024-12-16 05:24:47.204024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.296  [2024-12-16T05:24:48.122Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:07.863 00:07:07.863 05:24:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:07.863 05:24:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:07.863 05:24:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:07.863 05:24:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.122 { 00:07:08.122 "subsystems": [ 00:07:08.122 { 00:07:08.122 "subsystem": "bdev", 00:07:08.122 "config": [ 00:07:08.122 { 00:07:08.122 "params": { 00:07:08.122 "trtype": "pcie", 00:07:08.122 "traddr": "0000:00:10.0", 00:07:08.122 "name": "Nvme0" 00:07:08.122 }, 00:07:08.122 "method": "bdev_nvme_attach_controller" 00:07:08.122 }, 00:07:08.122 { 00:07:08.122 "method": "bdev_wait_for_examine" 00:07:08.122 } 00:07:08.122 ] 00:07:08.122 } 00:07:08.122 ] 00:07:08.122 } 00:07:08.122 [2024-12-16 05:24:48.191911] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:08.122 [2024-12-16 05:24:48.192095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63063 ] 00:07:08.122 [2024-12-16 05:24:48.369674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.381 [2024-12-16 05:24:48.451995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.381 [2024-12-16 05:24:48.597784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.639  [2024-12-16T05:24:49.834Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:09.575 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.575 05:24:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.575 { 00:07:09.575 "subsystems": [ 00:07:09.575 { 00:07:09.575 "subsystem": "bdev", 00:07:09.575 "config": [ 00:07:09.575 { 00:07:09.575 "params": { 00:07:09.575 "trtype": "pcie", 00:07:09.575 "traddr": "0000:00:10.0", 00:07:09.575 "name": "Nvme0" 00:07:09.575 }, 00:07:09.575 "method": "bdev_nvme_attach_controller" 00:07:09.575 }, 00:07:09.575 { 00:07:09.575 "method": "bdev_wait_for_examine" 00:07:09.575 } 00:07:09.575 ] 00:07:09.575 } 00:07:09.575 ] 00:07:09.575 } 00:07:09.575 [2024-12-16 05:24:49.735239] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:09.575 [2024-12-16 05:24:49.735423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63090 ] 00:07:09.834 [2024-12-16 05:24:49.916476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.834 [2024-12-16 05:24:50.006964] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.091 [2024-12-16 05:24:50.157109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.091  [2024-12-16T05:24:51.286Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:11.027 00:07:11.027 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:11.027 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:11.027 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:11.027 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:11.028 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:11.028 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:11.028 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:11.028 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.286 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:11.286 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:11.286 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.286 05:24:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.286 { 00:07:11.286 "subsystems": [ 00:07:11.286 { 00:07:11.286 "subsystem": "bdev", 00:07:11.286 "config": [ 00:07:11.286 { 00:07:11.286 "params": { 00:07:11.286 "trtype": "pcie", 00:07:11.286 "traddr": "0000:00:10.0", 00:07:11.286 "name": "Nvme0" 00:07:11.286 }, 00:07:11.286 "method": "bdev_nvme_attach_controller" 00:07:11.286 }, 00:07:11.286 { 00:07:11.286 "method": "bdev_wait_for_examine" 00:07:11.286 } 00:07:11.286 ] 00:07:11.286 } 00:07:11.286 ] 00:07:11.286 } 00:07:11.545 [2024-12-16 05:24:51.567199] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:11.545 [2024-12-16 05:24:51.567386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63121 ] 00:07:11.545 [2024-12-16 05:24:51.745455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.803 [2024-12-16 05:24:51.830976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.803 [2024-12-16 05:24:51.987238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.061  [2024-12-16T05:24:53.255Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:12.996 00:07:12.996 05:24:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:12.996 05:24:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:12.996 05:24:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.996 05:24:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.996 { 00:07:12.996 "subsystems": [ 00:07:12.996 { 00:07:12.996 "subsystem": "bdev", 00:07:12.996 "config": [ 00:07:12.996 { 00:07:12.996 "params": { 00:07:12.996 "trtype": "pcie", 00:07:12.996 "traddr": "0000:00:10.0", 00:07:12.996 "name": "Nvme0" 00:07:12.996 }, 00:07:12.996 "method": "bdev_nvme_attach_controller" 00:07:12.996 }, 00:07:12.996 { 00:07:12.996 "method": "bdev_wait_for_examine" 00:07:12.996 } 00:07:12.996 ] 00:07:12.996 } 00:07:12.996 ] 00:07:12.996 } 00:07:12.996 [2024-12-16 05:24:53.111857] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:12.996 [2024-12-16 05:24:53.112030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63147 ] 00:07:13.256 [2024-12-16 05:24:53.288623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.257 [2024-12-16 05:24:53.374609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.516 [2024-12-16 05:24:53.551447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.516  [2024-12-16T05:24:54.710Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:14.451 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.451 05:24:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.451 { 00:07:14.451 "subsystems": [ 00:07:14.451 { 00:07:14.451 "subsystem": "bdev", 00:07:14.451 "config": [ 00:07:14.451 { 00:07:14.451 "params": { 00:07:14.451 "trtype": "pcie", 00:07:14.451 "traddr": "0000:00:10.0", 00:07:14.451 "name": "Nvme0" 00:07:14.451 }, 00:07:14.451 "method": "bdev_nvme_attach_controller" 00:07:14.451 }, 00:07:14.451 { 00:07:14.451 "method": "bdev_wait_for_examine" 00:07:14.451 } 00:07:14.451 ] 00:07:14.451 } 00:07:14.451 ] 00:07:14.451 } 00:07:14.451 [2024-12-16 05:24:54.571873] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:14.451 [2024-12-16 05:24:54.572053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63173 ] 00:07:14.710 [2024-12-16 05:24:54.743928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.710 [2024-12-16 05:24:54.825390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.980 [2024-12-16 05:24:54.983056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.980  [2024-12-16T05:24:56.185Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:15.926 00:07:15.926 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:15.926 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:15.926 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:15.926 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:15.926 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:15.926 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:15.926 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.493 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:16.493 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:16.493 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.493 05:24:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.493 { 00:07:16.493 "subsystems": [ 00:07:16.493 { 00:07:16.493 "subsystem": "bdev", 00:07:16.493 "config": [ 00:07:16.493 { 00:07:16.493 "params": { 00:07:16.493 "trtype": "pcie", 00:07:16.493 "traddr": "0000:00:10.0", 00:07:16.493 "name": "Nvme0" 00:07:16.493 }, 00:07:16.493 "method": "bdev_nvme_attach_controller" 00:07:16.493 }, 00:07:16.493 { 00:07:16.493 "method": "bdev_wait_for_examine" 00:07:16.493 } 00:07:16.493 ] 00:07:16.493 } 00:07:16.493 ] 00:07:16.493 } 00:07:16.493 [2024-12-16 05:24:56.695627] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:16.493 [2024-12-16 05:24:56.695808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63204 ] 00:07:16.752 [2024-12-16 05:24:56.861545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.752 [2024-12-16 05:24:56.946807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.011 [2024-12-16 05:24:57.100991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.011  [2024-12-16T05:24:58.205Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:17.946 00:07:17.946 05:24:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:17.946 05:24:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:17.946 05:24:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.947 05:24:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.947 { 00:07:17.947 "subsystems": [ 00:07:17.947 { 00:07:17.947 "subsystem": "bdev", 00:07:17.947 "config": [ 00:07:17.947 { 00:07:17.947 "params": { 00:07:17.947 "trtype": "pcie", 00:07:17.947 "traddr": "0000:00:10.0", 00:07:17.947 "name": "Nvme0" 00:07:17.947 }, 00:07:17.947 "method": "bdev_nvme_attach_controller" 00:07:17.947 }, 00:07:17.947 { 00:07:17.947 "method": "bdev_wait_for_examine" 00:07:17.947 } 00:07:17.947 ] 00:07:17.947 } 00:07:17.947 ] 00:07:17.947 } 00:07:17.947 [2024-12-16 05:24:58.064371] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:17.947 [2024-12-16 05:24:58.064541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63225 ] 00:07:18.205 [2024-12-16 05:24:58.239078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.205 [2024-12-16 05:24:58.333307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.464 [2024-12-16 05:24:58.501901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.464  [2024-12-16T05:24:59.657Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:19.398 00:07:19.398 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.398 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:19.399 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:19.399 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:19.399 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:19.399 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:19.399 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:19.399 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:19.399 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:19.399 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.399 05:24:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.399 { 00:07:19.399 "subsystems": [ 00:07:19.399 { 00:07:19.399 "subsystem": "bdev", 00:07:19.399 "config": [ 00:07:19.399 { 00:07:19.399 "params": { 00:07:19.399 "trtype": "pcie", 00:07:19.399 "traddr": "0000:00:10.0", 00:07:19.399 "name": "Nvme0" 00:07:19.399 }, 00:07:19.399 "method": "bdev_nvme_attach_controller" 00:07:19.399 }, 00:07:19.399 { 00:07:19.399 "method": "bdev_wait_for_examine" 00:07:19.399 } 00:07:19.399 ] 00:07:19.399 } 00:07:19.399 ] 00:07:19.399 } 00:07:19.399 [2024-12-16 05:24:59.636269] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
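[editor's sketch] After each read-back the harness verifies the round trip with a plain diff and then scrubs the head of the namespace by writing a single 1 MiB block of zeros over it (the clear_nvme step traced above). A short sketch of that verify-and-clear step; it reuses $dd_bin, $DUMP0, $DUMP1 and $conf from the previous sketch, so it is not standalone.

# Verify: the data read back from the bdev must match what was written.
diff -q "$DUMP0" "$DUMP1"

# clear_nvme equivalent: overwrite the first 1 MiB of the bdev with zeros
# so the next iteration starts from a known state.
"$dd_bin" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
          --json <(printf '%s' "$conf")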
00:07:19.399 [2024-12-16 05:24:59.636444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63253 ] 00:07:19.657 [2024-12-16 05:24:59.812213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.657 [2024-12-16 05:24:59.892284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.916 [2024-12-16 05:25:00.048183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.174  [2024-12-16T05:25:01.000Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:20.741 00:07:20.741 ************************************ 00:07:20.741 END TEST dd_rw 00:07:20.741 ************************************ 00:07:20.741 00:07:20.741 real 0m31.769s 00:07:20.741 user 0m26.744s 00:07:20.741 sys 0m14.870s 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.741 ************************************ 00:07:20.741 START TEST dd_rw_offset 00:07:20.741 ************************************ 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=6hgmq5qlk0af3z74jpvt057mmj78uh1sornhj3xd5mcntoexzh9qhu9suytdqpowq2itemgjvl9hy9it3st0votq912u9at2n6mo8dmvq2uu3k276v8eehcy9evx7i2qsc7t9r8ev6ar09pxd9fb7vja4fvobsh3dus43emuls7ipl24swca5ex6agsavunlovwchto72qxxi9jyyyzmuvvh2uclnvfd7dit946fjxj1e3wksn2i55h7fg3kidk6a0qtbjstvf8jxge2wia53ffxqaajal9545pjrxfr03kle7eke6a95z3r0fngre80kpuoax520tvlstz9q73bfeiyjfzxt6usqgk0dwwfve3hvgzde7paiq32t38b730ua4k3fr69ls59bpau3utqdft2w37zicxs3tn6vprr420rrp6azkpxem6v13lnjg09v75r1qzijz13poohqnwlg0gry6pqkxu9z6twdv0j4hebugnzvrs9yhxq434bqeeh72u7eeo4cxmo35ewftx90eyc9b08nh45uma4ktndiqwdo0s4v5d5mw728agfjnckhjmn0zvituzn431r65zf7yyu3xfwwqufv626eiff5k2gp6eii2e1waiqgw6tvq0dln4nxxzf78xj9tpetr2478binz0pozp5d1qtxe1p33aw1owqug7kcbk9a9u40jua8dqqiwv9nrupimyuk047xhq28f5ptbfxuyt6e0ncgc3ilakbpana56yuwiwwoz21hqbjmj2ercxwdl0ogucg3mr2ik144tzaa7youp9313sb9l9f54fbsz99achoym2gt17r35qhx7pqx2aa81r5llu7ghpvgarz35najmfkkk3tyd1dwtbfdaxf9zms8tmo4sbc03xdwnhrbbirwxk5snbgywwuada0p06pva8b38cw2usfqtqzbsz5pumlb2w1xfdhsyvli7oz4skcodmvcpl43vs6zhnz4wchjveutjckqwc3gpxgl8rfy71d0j617kdhlz9w6ner80o5otd07m0l5kn4ig9rr1ag4ahairqmnyjtorpxfgf797n8e92p4su9w9wrm45fxtyyl6p7mcic1ai1r7xswfkmtxex7p8i5rjy9qw10dt43kpmchu94qnaeae5qhn3cjzgfq7r8qx2weeevro3ps4lfu4pquj1r2nsa3tul25u8epxoly2ykw4yu1946zpc4kj4g50lkqxs8elhp8xkqqiyze062mcbehv36h6l8uv4j9kfx3x5ysyyjee88yc9j5grxbdphumo13szl41cf9f5gxyeqx8sfj7dhbd35n2x2vzwzspuu8s54id0opg19ajbvcheef8yjy0j80m855xhsfd1vlkn4rtqdpzua7tcwzbfnpl9nyq8tgi9wq745apnuj8o9vijs52ectntv772i9pkj0mdyv2eqtx90rl9ebxilbvuinv30y393byk71vcpm86xdcar97nrv2csww4ucgcry757dxjcr6zwnrwc8c3hzm9860h898su0msleeybn3bs9wiae38wom2hzu21aj2n70yz2mf0dxpvbpau8gxvyd21yz5m34y892wp9we3qegnxd4fbzvyexwtlkaado65fvjm27jh8xggp0e2imyew55zb53jh8ryn5bjpermcv2tjeksb6i083nyjvc98mw7sdhnsggsz4zo910x2jnuew7d2ce9jkwdfvi8rpqvs9j237lui89symg9o87543n18rsjamktss7i8lawr56ygrozer8w637hgaqt4kahb0nk51dpmhl42g25lzusmxbfpjs38eapmc7jwr0c3mb0w0woxosx8t4bajkh27jdf7ex9sk77owwqpdeqcv0tiqnvjh7g4vkb7t82y61d6zdw0jn80vkg0o04ft3avl09xa5xq9wtqq43llu2an5fuy2dny84lw6kpv0nw6djdn5kipwif51w39gd40wrebxovxrbheqkkhurs523clnhc0k2x6mqb2z4ivv2txfgc5da4zjaqbnen265gmccrkuvnqiswbyab03jz4owub4e2x4u9u9e0p91evunhtg7mvpwtw79khmgxfxkndk0en1nenmk10s23cnfi2smy9xjovupo24yyxqj53vkmhnaeq1a1x9ke5utfd0itrt0vzzr5dsh5mf8k88jpzeopc3mo0vyia18z2zdliqx7jiaoxyi50nqcr09ppd77g601iw4f4kwieysjoay0b06s9a1gdckr1hqly52vq5uf5ir9u8fdmvpmg6mil924bhnmib39pzzf9o1sp3fv8p8cx8m9ezbcbbsfg9qwmln10aqh1wdhzbtc36un2w5xdidw1dt4cu6fguhmdejtffarcqjxor8zn92zjbru1l197ni9kxt56gr4ofrc3jj2opby4x6hw53nctmandwlvwrltryz4osls2ce2qzsyfdhny88zdgq6lo8den9l6ob28507jvbop3h0qh6vvdxq63o5z6mkmch9oyu7ovgvgr6ygfgmgbi5cmuaofe8m10vnf9tfxactqxxby3xx0o11tk33dvriq74s5no83ctv43g6ok2rivcvbii3wuxj9wkw5n2vniu71y3gm5h6ibqfcewoe25q1qds56q7ucbqrf1w86ouoqbzt4fpurjrb39ze0nmnwellqbhwbcvoal41lc48rhib72m3wf4l4kchapvd7tofnwxv4g8r1g7fobtoxj2eyv7yvo6ov8n1lgum6zsb9mx8cz3vvgss6zsten3kpnzxjpy8zowesf58disdetwm2al1u09dhqiucmfiqthz2jxgzg2buhyvh9cp15d14qwr63x2z553mu7gsnf8ndb58ni2axv02wql81x2zyaaysz619mc8ocsxi4mr6r7k26pkxsjmc93hsynk7kvywhitsbxzctxkjne3t3ibv450qxza81ju8q1iionm6sz320fp10skk8j3z2ozx56gqq9doxqn9fq884urhw4lbgm7puwmup562dubu2ihg48ig7k85l3n0ir054phydwx7qkkdgxq2bo9on5ee9rg2hxpnzp7p5dijw0wh4mjg1p8ukmd4nldv347py1of81t2uzpxidkypg3srudpfxw74wxlfi0jagilflyfx25ea1vmm864ht46089ddwn17nv8dav8jbojc4dob1c3xdc19h1ilinlfsxnmnxwxw8j8f6h1twlsphncat35a6b258vikyd0ylfubx11gku305er3rslksdcop0mqsotx1phw30oz31ydxhbztr1v7r9doj2p2y1q1xpqmz1cjpvlxi81bznmi7guxfwaojy11w9crokwnd93of1e86x3xz4pr558ogdsd54fph5c5b5cc7thw3ud16ffqvv0vsjp07qa7e46ym47rhnuuchcpm3j5p3ozvu6nx9r8tvbiudrro2wzqmp0gc663zlmx0pj88wqp58mgqwkzmpf3x3fb81gopygder0r1d1wk
cgy1l0tyqdcr244rrf7u8u94wm8gzoomlugi4c2i1kxfmltcbb27ox9sdg57fqgouap4tdjsnr8gta0y6xfl64hfkmx3wbwbkn6dyzog7dfbu5utxly39ajvf2qrcjk8z0of7ds63cr0r3w3vdldezxayg71iwfhy2vpjluya7zpk6y5mvchbxaore5hhomcy0psauj0suj4lpwhq94tjhz1flh1rcloksys13oz58a90ea7zbellx1cbicsq2tpfir3octrbns2t4qvnqo7lqdj7aba7eja5spe6skhhluoiggnt1e4xyg34eeofnaj1wf9b260ivka529r85gt6hc9femg1bkmn2kf9884ur8tdalkpud3p39shz61ldsxc96vc9gc9982knvod8o35fuycej4x9wfjj4yijjp9ddclurfpvh4fld46k8hyf149trb62j03k7a94ivx70q7s2aypaera9iqg9yishgpz32kus2fporo80u3vs0wtta0bz9tmly5218f4bpsts1nz6lxvhexicg4i 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:20.741 05:25:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:21.000 { 00:07:21.000 "subsystems": [ 00:07:21.000 { 00:07:21.000 "subsystem": "bdev", 00:07:21.000 "config": [ 00:07:21.000 { 00:07:21.000 "params": { 00:07:21.000 "trtype": "pcie", 00:07:21.000 "traddr": "0000:00:10.0", 00:07:21.000 "name": "Nvme0" 00:07:21.000 }, 00:07:21.000 "method": "bdev_nvme_attach_controller" 00:07:21.000 }, 00:07:21.000 { 00:07:21.000 "method": "bdev_wait_for_examine" 00:07:21.000 } 00:07:21.000 ] 00:07:21.000 } 00:07:21.000 ] 00:07:21.000 } 00:07:21.000 [2024-12-16 05:25:01.094318] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:21.000 [2024-12-16 05:25:01.094485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63295 ] 00:07:21.258 [2024-12-16 05:25:01.272707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.258 [2024-12-16 05:25:01.356745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.258 [2024-12-16 05:25:01.499723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.517  [2024-12-16T05:25:02.712Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:22.453 00:07:22.453 05:25:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:22.453 05:25:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:22.453 05:25:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:22.453 05:25:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:22.453 { 00:07:22.453 "subsystems": [ 00:07:22.453 { 00:07:22.453 "subsystem": "bdev", 00:07:22.453 "config": [ 00:07:22.453 { 00:07:22.453 "params": { 00:07:22.453 "trtype": "pcie", 00:07:22.453 "traddr": "0000:00:10.0", 00:07:22.453 "name": "Nvme0" 00:07:22.453 }, 00:07:22.453 "method": "bdev_nvme_attach_controller" 00:07:22.453 }, 00:07:22.453 { 00:07:22.453 "method": "bdev_wait_for_examine" 00:07:22.453 } 00:07:22.453 ] 00:07:22.453 } 00:07:22.453 ] 00:07:22.453 } 00:07:22.453 [2024-12-16 05:25:02.668000] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
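[editor's sketch] The dd_rw_offset test works on a single generated 4 KiB payload (the long random string above is that payload, captured by xtrace): it writes dd.dump0 to the bdev one block in with --seek=1, then reads one block back from the same offset with --skip=1 --count=1, and the long [[ ... ]] comparison that follows checks the bytes. A sketch of the two offset invocations, again reusing the variables from the round-trip sketch above.

# Write the 4 KiB payload with a one-block output offset (--seek=1).
"$dd_bin" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$conf")

# Read a single block back from a one-block input offset (--skip=1 --count=1).
"$dd_bin" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 \
          --json <(printf '%s' "$conf")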
00:07:22.453 [2024-12-16 05:25:02.668177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63321 ] 00:07:22.712 [2024-12-16 05:25:02.846485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.712 [2024-12-16 05:25:02.940763] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.970 [2024-12-16 05:25:03.105309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.229  [2024-12-16T05:25:04.425Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:24.166 00:07:24.166 05:25:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 6hgmq5qlk0af3z74jpvt057mmj78uh1sornhj3xd5mcntoexzh9qhu9suytdqpowq2itemgjvl9hy9it3st0votq912u9at2n6mo8dmvq2uu3k276v8eehcy9evx7i2qsc7t9r8ev6ar09pxd9fb7vja4fvobsh3dus43emuls7ipl24swca5ex6agsavunlovwchto72qxxi9jyyyzmuvvh2uclnvfd7dit946fjxj1e3wksn2i55h7fg3kidk6a0qtbjstvf8jxge2wia53ffxqaajal9545pjrxfr03kle7eke6a95z3r0fngre80kpuoax520tvlstz9q73bfeiyjfzxt6usqgk0dwwfve3hvgzde7paiq32t38b730ua4k3fr69ls59bpau3utqdft2w37zicxs3tn6vprr420rrp6azkpxem6v13lnjg09v75r1qzijz13poohqnwlg0gry6pqkxu9z6twdv0j4hebugnzvrs9yhxq434bqeeh72u7eeo4cxmo35ewftx90eyc9b08nh45uma4ktndiqwdo0s4v5d5mw728agfjnckhjmn0zvituzn431r65zf7yyu3xfwwqufv626eiff5k2gp6eii2e1waiqgw6tvq0dln4nxxzf78xj9tpetr2478binz0pozp5d1qtxe1p33aw1owqug7kcbk9a9u40jua8dqqiwv9nrupimyuk047xhq28f5ptbfxuyt6e0ncgc3ilakbpana56yuwiwwoz21hqbjmj2ercxwdl0ogucg3mr2ik144tzaa7youp9313sb9l9f54fbsz99achoym2gt17r35qhx7pqx2aa81r5llu7ghpvgarz35najmfkkk3tyd1dwtbfdaxf9zms8tmo4sbc03xdwnhrbbirwxk5snbgywwuada0p06pva8b38cw2usfqtqzbsz5pumlb2w1xfdhsyvli7oz4skcodmvcpl43vs6zhnz4wchjveutjckqwc3gpxgl8rfy71d0j617kdhlz9w6ner80o5otd07m0l5kn4ig9rr1ag4ahairqmnyjtorpxfgf797n8e92p4su9w9wrm45fxtyyl6p7mcic1ai1r7xswfkmtxex7p8i5rjy9qw10dt43kpmchu94qnaeae5qhn3cjzgfq7r8qx2weeevro3ps4lfu4pquj1r2nsa3tul25u8epxoly2ykw4yu1946zpc4kj4g50lkqxs8elhp8xkqqiyze062mcbehv36h6l8uv4j9kfx3x5ysyyjee88yc9j5grxbdphumo13szl41cf9f5gxyeqx8sfj7dhbd35n2x2vzwzspuu8s54id0opg19ajbvcheef8yjy0j80m855xhsfd1vlkn4rtqdpzua7tcwzbfnpl9nyq8tgi9wq745apnuj8o9vijs52ectntv772i9pkj0mdyv2eqtx90rl9ebxilbvuinv30y393byk71vcpm86xdcar97nrv2csww4ucgcry757dxjcr6zwnrwc8c3hzm9860h898su0msleeybn3bs9wiae38wom2hzu21aj2n70yz2mf0dxpvbpau8gxvyd21yz5m34y892wp9we3qegnxd4fbzvyexwtlkaado65fvjm27jh8xggp0e2imyew55zb53jh8ryn5bjpermcv2tjeksb6i083nyjvc98mw7sdhnsggsz4zo910x2jnuew7d2ce9jkwdfvi8rpqvs9j237lui89symg9o87543n18rsjamktss7i8lawr56ygrozer8w637hgaqt4kahb0nk51dpmhl42g25lzusmxbfpjs38eapmc7jwr0c3mb0w0woxosx8t4bajkh27jdf7ex9sk77owwqpdeqcv0tiqnvjh7g4vkb7t82y61d6zdw0jn80vkg0o04ft3avl09xa5xq9wtqq43llu2an5fuy2dny84lw6kpv0nw6djdn5kipwif51w39gd40wrebxovxrbheqkkhurs523clnhc0k2x6mqb2z4ivv2txfgc5da4zjaqbnen265gmccrkuvnqiswbyab03jz4owub4e2x4u9u9e0p91evunhtg7mvpwtw79khmgxfxkndk0en1nenmk10s23cnfi2smy9xjovupo24yyxqj53vkmhnaeq1a1x9ke5utfd0itrt0vzzr5dsh5mf8k88jpzeopc3mo0vyia18z2zdliqx7jiaoxyi50nqcr09ppd77g601iw4f4kwieysjoay0b06s9a1gdckr1hqly52vq5uf5ir9u8fdmvpmg6mil924bhnmib39pzzf9o1sp3fv8p8cx8m9ezbcbbsfg9qwmln10aqh1wdhzbtc36un2w5xdidw1dt4cu6fguhmdejtffarcqjxor8zn92zjbru1l197ni9kxt56gr4ofrc3jj2opby4x6hw53nctmandwlvwrltryz4osls2ce2qzsyfdhny88zdgq6lo8den9l6ob28507jvbop3h0qh6vvdxq63o5z6mkmch9oyu7ovgvgr6ygfgmgbi5cmuaofe8m10vnf9tfxactqxxby3xx0o11tk33dvriq74s5n
o83ctv43g6ok2rivcvbii3wuxj9wkw5n2vniu71y3gm5h6ibqfcewoe25q1qds56q7ucbqrf1w86ouoqbzt4fpurjrb39ze0nmnwellqbhwbcvoal41lc48rhib72m3wf4l4kchapvd7tofnwxv4g8r1g7fobtoxj2eyv7yvo6ov8n1lgum6zsb9mx8cz3vvgss6zsten3kpnzxjpy8zowesf58disdetwm2al1u09dhqiucmfiqthz2jxgzg2buhyvh9cp15d14qwr63x2z553mu7gsnf8ndb58ni2axv02wql81x2zyaaysz619mc8ocsxi4mr6r7k26pkxsjmc93hsynk7kvywhitsbxzctxkjne3t3ibv450qxza81ju8q1iionm6sz320fp10skk8j3z2ozx56gqq9doxqn9fq884urhw4lbgm7puwmup562dubu2ihg48ig7k85l3n0ir054phydwx7qkkdgxq2bo9on5ee9rg2hxpnzp7p5dijw0wh4mjg1p8ukmd4nldv347py1of81t2uzpxidkypg3srudpfxw74wxlfi0jagilflyfx25ea1vmm864ht46089ddwn17nv8dav8jbojc4dob1c3xdc19h1ilinlfsxnmnxwxw8j8f6h1twlsphncat35a6b258vikyd0ylfubx11gku305er3rslksdcop0mqsotx1phw30oz31ydxhbztr1v7r9doj2p2y1q1xpqmz1cjpvlxi81bznmi7guxfwaojy11w9crokwnd93of1e86x3xz4pr558ogdsd54fph5c5b5cc7thw3ud16ffqvv0vsjp07qa7e46ym47rhnuuchcpm3j5p3ozvu6nx9r8tvbiudrro2wzqmp0gc663zlmx0pj88wqp58mgqwkzmpf3x3fb81gopygder0r1d1wkcgy1l0tyqdcr244rrf7u8u94wm8gzoomlugi4c2i1kxfmltcbb27ox9sdg57fqgouap4tdjsnr8gta0y6xfl64hfkmx3wbwbkn6dyzog7dfbu5utxly39ajvf2qrcjk8z0of7ds63cr0r3w3vdldezxayg71iwfhy2vpjluya7zpk6y5mvchbxaore5hhomcy0psauj0suj4lpwhq94tjhz1flh1rcloksys13oz58a90ea7zbellx1cbicsq2tpfir3octrbns2t4qvnqo7lqdj7aba7eja5spe6skhhluoiggnt1e4xyg34eeofnaj1wf9b260ivka529r85gt6hc9femg1bkmn2kf9884ur8tdalkpud3p39shz61ldsxc96vc9gc9982knvod8o35fuycej4x9wfjj4yijjp9ddclurfpvh4fld46k8hyf149trb62j03k7a94ivx70q7s2aypaera9iqg9yishgpz32kus2fporo80u3vs0wtta0bz9tmly5218f4bpsts1nz6lxvhexicg4i == \6\h\g\m\q\5\q\l\k\0\a\f\3\z\7\4\j\p\v\t\0\5\7\m\m\j\7\8\u\h\1\s\o\r\n\h\j\3\x\d\5\m\c\n\t\o\e\x\z\h\9\q\h\u\9\s\u\y\t\d\q\p\o\w\q\2\i\t\e\m\g\j\v\l\9\h\y\9\i\t\3\s\t\0\v\o\t\q\9\1\2\u\9\a\t\2\n\6\m\o\8\d\m\v\q\2\u\u\3\k\2\7\6\v\8\e\e\h\c\y\9\e\v\x\7\i\2\q\s\c\7\t\9\r\8\e\v\6\a\r\0\9\p\x\d\9\f\b\7\v\j\a\4\f\v\o\b\s\h\3\d\u\s\4\3\e\m\u\l\s\7\i\p\l\2\4\s\w\c\a\5\e\x\6\a\g\s\a\v\u\n\l\o\v\w\c\h\t\o\7\2\q\x\x\i\9\j\y\y\y\z\m\u\v\v\h\2\u\c\l\n\v\f\d\7\d\i\t\9\4\6\f\j\x\j\1\e\3\w\k\s\n\2\i\5\5\h\7\f\g\3\k\i\d\k\6\a\0\q\t\b\j\s\t\v\f\8\j\x\g\e\2\w\i\a\5\3\f\f\x\q\a\a\j\a\l\9\5\4\5\p\j\r\x\f\r\0\3\k\l\e\7\e\k\e\6\a\9\5\z\3\r\0\f\n\g\r\e\8\0\k\p\u\o\a\x\5\2\0\t\v\l\s\t\z\9\q\7\3\b\f\e\i\y\j\f\z\x\t\6\u\s\q\g\k\0\d\w\w\f\v\e\3\h\v\g\z\d\e\7\p\a\i\q\3\2\t\3\8\b\7\3\0\u\a\4\k\3\f\r\6\9\l\s\5\9\b\p\a\u\3\u\t\q\d\f\t\2\w\3\7\z\i\c\x\s\3\t\n\6\v\p\r\r\4\2\0\r\r\p\6\a\z\k\p\x\e\m\6\v\1\3\l\n\j\g\0\9\v\7\5\r\1\q\z\i\j\z\1\3\p\o\o\h\q\n\w\l\g\0\g\r\y\6\p\q\k\x\u\9\z\6\t\w\d\v\0\j\4\h\e\b\u\g\n\z\v\r\s\9\y\h\x\q\4\3\4\b\q\e\e\h\7\2\u\7\e\e\o\4\c\x\m\o\3\5\e\w\f\t\x\9\0\e\y\c\9\b\0\8\n\h\4\5\u\m\a\4\k\t\n\d\i\q\w\d\o\0\s\4\v\5\d\5\m\w\7\2\8\a\g\f\j\n\c\k\h\j\m\n\0\z\v\i\t\u\z\n\4\3\1\r\6\5\z\f\7\y\y\u\3\x\f\w\w\q\u\f\v\6\2\6\e\i\f\f\5\k\2\g\p\6\e\i\i\2\e\1\w\a\i\q\g\w\6\t\v\q\0\d\l\n\4\n\x\x\z\f\7\8\x\j\9\t\p\e\t\r\2\4\7\8\b\i\n\z\0\p\o\z\p\5\d\1\q\t\x\e\1\p\3\3\a\w\1\o\w\q\u\g\7\k\c\b\k\9\a\9\u\4\0\j\u\a\8\d\q\q\i\w\v\9\n\r\u\p\i\m\y\u\k\0\4\7\x\h\q\2\8\f\5\p\t\b\f\x\u\y\t\6\e\0\n\c\g\c\3\i\l\a\k\b\p\a\n\a\5\6\y\u\w\i\w\w\o\z\2\1\h\q\b\j\m\j\2\e\r\c\x\w\d\l\0\o\g\u\c\g\3\m\r\2\i\k\1\4\4\t\z\a\a\7\y\o\u\p\9\3\1\3\s\b\9\l\9\f\5\4\f\b\s\z\9\9\a\c\h\o\y\m\2\g\t\1\7\r\3\5\q\h\x\7\p\q\x\2\a\a\8\1\r\5\l\l\u\7\g\h\p\v\g\a\r\z\3\5\n\a\j\m\f\k\k\k\3\t\y\d\1\d\w\t\b\f\d\a\x\f\9\z\m\s\8\t\m\o\4\s\b\c\0\3\x\d\w\n\h\r\b\b\i\r\w\x\k\5\s\n\b\g\y\w\w\u\a\d\a\0\p\0\6\p\v\a\8\b\3\8\c\w\2\u\s\f\q\t\q\z\b\s\z\5\p\u\m\l\b\2\w\1\x\f\d\h\s\y\v\l\i\7\o\z\4\s\k\c\o\d\m\v\c\p\l\4\3\v\s\6\z\h\n\z\4\w\c\h\j\v\e\u\t\j\c\k\q\w\c\3\g\p\x\g\l\8\r\f\y\7\1\d\0\j\6\1\7\k\d\h\l\z\9\
w\6\n\e\r\8\0\o\5\o\t\d\0\7\m\0\l\5\k\n\4\i\g\9\r\r\1\a\g\4\a\h\a\i\r\q\m\n\y\j\t\o\r\p\x\f\g\f\7\9\7\n\8\e\9\2\p\4\s\u\9\w\9\w\r\m\4\5\f\x\t\y\y\l\6\p\7\m\c\i\c\1\a\i\1\r\7\x\s\w\f\k\m\t\x\e\x\7\p\8\i\5\r\j\y\9\q\w\1\0\d\t\4\3\k\p\m\c\h\u\9\4\q\n\a\e\a\e\5\q\h\n\3\c\j\z\g\f\q\7\r\8\q\x\2\w\e\e\e\v\r\o\3\p\s\4\l\f\u\4\p\q\u\j\1\r\2\n\s\a\3\t\u\l\2\5\u\8\e\p\x\o\l\y\2\y\k\w\4\y\u\1\9\4\6\z\p\c\4\k\j\4\g\5\0\l\k\q\x\s\8\e\l\h\p\8\x\k\q\q\i\y\z\e\0\6\2\m\c\b\e\h\v\3\6\h\6\l\8\u\v\4\j\9\k\f\x\3\x\5\y\s\y\y\j\e\e\8\8\y\c\9\j\5\g\r\x\b\d\p\h\u\m\o\1\3\s\z\l\4\1\c\f\9\f\5\g\x\y\e\q\x\8\s\f\j\7\d\h\b\d\3\5\n\2\x\2\v\z\w\z\s\p\u\u\8\s\5\4\i\d\0\o\p\g\1\9\a\j\b\v\c\h\e\e\f\8\y\j\y\0\j\8\0\m\8\5\5\x\h\s\f\d\1\v\l\k\n\4\r\t\q\d\p\z\u\a\7\t\c\w\z\b\f\n\p\l\9\n\y\q\8\t\g\i\9\w\q\7\4\5\a\p\n\u\j\8\o\9\v\i\j\s\5\2\e\c\t\n\t\v\7\7\2\i\9\p\k\j\0\m\d\y\v\2\e\q\t\x\9\0\r\l\9\e\b\x\i\l\b\v\u\i\n\v\3\0\y\3\9\3\b\y\k\7\1\v\c\p\m\8\6\x\d\c\a\r\9\7\n\r\v\2\c\s\w\w\4\u\c\g\c\r\y\7\5\7\d\x\j\c\r\6\z\w\n\r\w\c\8\c\3\h\z\m\9\8\6\0\h\8\9\8\s\u\0\m\s\l\e\e\y\b\n\3\b\s\9\w\i\a\e\3\8\w\o\m\2\h\z\u\2\1\a\j\2\n\7\0\y\z\2\m\f\0\d\x\p\v\b\p\a\u\8\g\x\v\y\d\2\1\y\z\5\m\3\4\y\8\9\2\w\p\9\w\e\3\q\e\g\n\x\d\4\f\b\z\v\y\e\x\w\t\l\k\a\a\d\o\6\5\f\v\j\m\2\7\j\h\8\x\g\g\p\0\e\2\i\m\y\e\w\5\5\z\b\5\3\j\h\8\r\y\n\5\b\j\p\e\r\m\c\v\2\t\j\e\k\s\b\6\i\0\8\3\n\y\j\v\c\9\8\m\w\7\s\d\h\n\s\g\g\s\z\4\z\o\9\1\0\x\2\j\n\u\e\w\7\d\2\c\e\9\j\k\w\d\f\v\i\8\r\p\q\v\s\9\j\2\3\7\l\u\i\8\9\s\y\m\g\9\o\8\7\5\4\3\n\1\8\r\s\j\a\m\k\t\s\s\7\i\8\l\a\w\r\5\6\y\g\r\o\z\e\r\8\w\6\3\7\h\g\a\q\t\4\k\a\h\b\0\n\k\5\1\d\p\m\h\l\4\2\g\2\5\l\z\u\s\m\x\b\f\p\j\s\3\8\e\a\p\m\c\7\j\w\r\0\c\3\m\b\0\w\0\w\o\x\o\s\x\8\t\4\b\a\j\k\h\2\7\j\d\f\7\e\x\9\s\k\7\7\o\w\w\q\p\d\e\q\c\v\0\t\i\q\n\v\j\h\7\g\4\v\k\b\7\t\8\2\y\6\1\d\6\z\d\w\0\j\n\8\0\v\k\g\0\o\0\4\f\t\3\a\v\l\0\9\x\a\5\x\q\9\w\t\q\q\4\3\l\l\u\2\a\n\5\f\u\y\2\d\n\y\8\4\l\w\6\k\p\v\0\n\w\6\d\j\d\n\5\k\i\p\w\i\f\5\1\w\3\9\g\d\4\0\w\r\e\b\x\o\v\x\r\b\h\e\q\k\k\h\u\r\s\5\2\3\c\l\n\h\c\0\k\2\x\6\m\q\b\2\z\4\i\v\v\2\t\x\f\g\c\5\d\a\4\z\j\a\q\b\n\e\n\2\6\5\g\m\c\c\r\k\u\v\n\q\i\s\w\b\y\a\b\0\3\j\z\4\o\w\u\b\4\e\2\x\4\u\9\u\9\e\0\p\9\1\e\v\u\n\h\t\g\7\m\v\p\w\t\w\7\9\k\h\m\g\x\f\x\k\n\d\k\0\e\n\1\n\e\n\m\k\1\0\s\2\3\c\n\f\i\2\s\m\y\9\x\j\o\v\u\p\o\2\4\y\y\x\q\j\5\3\v\k\m\h\n\a\e\q\1\a\1\x\9\k\e\5\u\t\f\d\0\i\t\r\t\0\v\z\z\r\5\d\s\h\5\m\f\8\k\8\8\j\p\z\e\o\p\c\3\m\o\0\v\y\i\a\1\8\z\2\z\d\l\i\q\x\7\j\i\a\o\x\y\i\5\0\n\q\c\r\0\9\p\p\d\7\7\g\6\0\1\i\w\4\f\4\k\w\i\e\y\s\j\o\a\y\0\b\0\6\s\9\a\1\g\d\c\k\r\1\h\q\l\y\5\2\v\q\5\u\f\5\i\r\9\u\8\f\d\m\v\p\m\g\6\m\i\l\9\2\4\b\h\n\m\i\b\3\9\p\z\z\f\9\o\1\s\p\3\f\v\8\p\8\c\x\8\m\9\e\z\b\c\b\b\s\f\g\9\q\w\m\l\n\1\0\a\q\h\1\w\d\h\z\b\t\c\3\6\u\n\2\w\5\x\d\i\d\w\1\d\t\4\c\u\6\f\g\u\h\m\d\e\j\t\f\f\a\r\c\q\j\x\o\r\8\z\n\9\2\z\j\b\r\u\1\l\1\9\7\n\i\9\k\x\t\5\6\g\r\4\o\f\r\c\3\j\j\2\o\p\b\y\4\x\6\h\w\5\3\n\c\t\m\a\n\d\w\l\v\w\r\l\t\r\y\z\4\o\s\l\s\2\c\e\2\q\z\s\y\f\d\h\n\y\8\8\z\d\g\q\6\l\o\8\d\e\n\9\l\6\o\b\2\8\5\0\7\j\v\b\o\p\3\h\0\q\h\6\v\v\d\x\q\6\3\o\5\z\6\m\k\m\c\h\9\o\y\u\7\o\v\g\v\g\r\6\y\g\f\g\m\g\b\i\5\c\m\u\a\o\f\e\8\m\1\0\v\n\f\9\t\f\x\a\c\t\q\x\x\b\y\3\x\x\0\o\1\1\t\k\3\3\d\v\r\i\q\7\4\s\5\n\o\8\3\c\t\v\4\3\g\6\o\k\2\r\i\v\c\v\b\i\i\3\w\u\x\j\9\w\k\w\5\n\2\v\n\i\u\7\1\y\3\g\m\5\h\6\i\b\q\f\c\e\w\o\e\2\5\q\1\q\d\s\5\6\q\7\u\c\b\q\r\f\1\w\8\6\o\u\o\q\b\z\t\4\f\p\u\r\j\r\b\3\9\z\e\0\n\m\n\w\e\l\l\q\b\h\w\b\c\v\o\a\l\4\1\l\c\4\8\r\h\i\b\7\2\m\3\w\f\4\l\4\k\c\h\a\p\v\d\7\t\o\f\n\w\x\v\4\g\8\r\1\g\7\f\o\b\t\o\x\j\2\e\y\v\7\y\v\o\6\o\v\8\n\1\l\g\u\m\6\z\s\b\9\m\x\8\c\z\3\v\v\g\s\s\6\z\s\t\e\n
\3\k\p\n\z\x\j\p\y\8\z\o\w\e\s\f\5\8\d\i\s\d\e\t\w\m\2\a\l\1\u\0\9\d\h\q\i\u\c\m\f\i\q\t\h\z\2\j\x\g\z\g\2\b\u\h\y\v\h\9\c\p\1\5\d\1\4\q\w\r\6\3\x\2\z\5\5\3\m\u\7\g\s\n\f\8\n\d\b\5\8\n\i\2\a\x\v\0\2\w\q\l\8\1\x\2\z\y\a\a\y\s\z\6\1\9\m\c\8\o\c\s\x\i\4\m\r\6\r\7\k\2\6\p\k\x\s\j\m\c\9\3\h\s\y\n\k\7\k\v\y\w\h\i\t\s\b\x\z\c\t\x\k\j\n\e\3\t\3\i\b\v\4\5\0\q\x\z\a\8\1\j\u\8\q\1\i\i\o\n\m\6\s\z\3\2\0\f\p\1\0\s\k\k\8\j\3\z\2\o\z\x\5\6\g\q\q\9\d\o\x\q\n\9\f\q\8\8\4\u\r\h\w\4\l\b\g\m\7\p\u\w\m\u\p\5\6\2\d\u\b\u\2\i\h\g\4\8\i\g\7\k\8\5\l\3\n\0\i\r\0\5\4\p\h\y\d\w\x\7\q\k\k\d\g\x\q\2\b\o\9\o\n\5\e\e\9\r\g\2\h\x\p\n\z\p\7\p\5\d\i\j\w\0\w\h\4\m\j\g\1\p\8\u\k\m\d\4\n\l\d\v\3\4\7\p\y\1\o\f\8\1\t\2\u\z\p\x\i\d\k\y\p\g\3\s\r\u\d\p\f\x\w\7\4\w\x\l\f\i\0\j\a\g\i\l\f\l\y\f\x\2\5\e\a\1\v\m\m\8\6\4\h\t\4\6\0\8\9\d\d\w\n\1\7\n\v\8\d\a\v\8\j\b\o\j\c\4\d\o\b\1\c\3\x\d\c\1\9\h\1\i\l\i\n\l\f\s\x\n\m\n\x\w\x\w\8\j\8\f\6\h\1\t\w\l\s\p\h\n\c\a\t\3\5\a\6\b\2\5\8\v\i\k\y\d\0\y\l\f\u\b\x\1\1\g\k\u\3\0\5\e\r\3\r\s\l\k\s\d\c\o\p\0\m\q\s\o\t\x\1\p\h\w\3\0\o\z\3\1\y\d\x\h\b\z\t\r\1\v\7\r\9\d\o\j\2\p\2\y\1\q\1\x\p\q\m\z\1\c\j\p\v\l\x\i\8\1\b\z\n\m\i\7\g\u\x\f\w\a\o\j\y\1\1\w\9\c\r\o\k\w\n\d\9\3\o\f\1\e\8\6\x\3\x\z\4\p\r\5\5\8\o\g\d\s\d\5\4\f\p\h\5\c\5\b\5\c\c\7\t\h\w\3\u\d\1\6\f\f\q\v\v\0\v\s\j\p\0\7\q\a\7\e\4\6\y\m\4\7\r\h\n\u\u\c\h\c\p\m\3\j\5\p\3\o\z\v\u\6\n\x\9\r\8\t\v\b\i\u\d\r\r\o\2\w\z\q\m\p\0\g\c\6\6\3\z\l\m\x\0\p\j\8\8\w\q\p\5\8\m\g\q\w\k\z\m\p\f\3\x\3\f\b\8\1\g\o\p\y\g\d\e\r\0\r\1\d\1\w\k\c\g\y\1\l\0\t\y\q\d\c\r\2\4\4\r\r\f\7\u\8\u\9\4\w\m\8\g\z\o\o\m\l\u\g\i\4\c\2\i\1\k\x\f\m\l\t\c\b\b\2\7\o\x\9\s\d\g\5\7\f\q\g\o\u\a\p\4\t\d\j\s\n\r\8\g\t\a\0\y\6\x\f\l\6\4\h\f\k\m\x\3\w\b\w\b\k\n\6\d\y\z\o\g\7\d\f\b\u\5\u\t\x\l\y\3\9\a\j\v\f\2\q\r\c\j\k\8\z\0\o\f\7\d\s\6\3\c\r\0\r\3\w\3\v\d\l\d\e\z\x\a\y\g\7\1\i\w\f\h\y\2\v\p\j\l\u\y\a\7\z\p\k\6\y\5\m\v\c\h\b\x\a\o\r\e\5\h\h\o\m\c\y\0\p\s\a\u\j\0\s\u\j\4\l\p\w\h\q\9\4\t\j\h\z\1\f\l\h\1\r\c\l\o\k\s\y\s\1\3\o\z\5\8\a\9\0\e\a\7\z\b\e\l\l\x\1\c\b\i\c\s\q\2\t\p\f\i\r\3\o\c\t\r\b\n\s\2\t\4\q\v\n\q\o\7\l\q\d\j\7\a\b\a\7\e\j\a\5\s\p\e\6\s\k\h\h\l\u\o\i\g\g\n\t\1\e\4\x\y\g\3\4\e\e\o\f\n\a\j\1\w\f\9\b\2\6\0\i\v\k\a\5\2\9\r\8\5\g\t\6\h\c\9\f\e\m\g\1\b\k\m\n\2\k\f\9\8\8\4\u\r\8\t\d\a\l\k\p\u\d\3\p\3\9\s\h\z\6\1\l\d\s\x\c\9\6\v\c\9\g\c\9\9\8\2\k\n\v\o\d\8\o\3\5\f\u\y\c\e\j\4\x\9\w\f\j\j\4\y\i\j\j\p\9\d\d\c\l\u\r\f\p\v\h\4\f\l\d\4\6\k\8\h\y\f\1\4\9\t\r\b\6\2\j\0\3\k\7\a\9\4\i\v\x\7\0\q\7\s\2\a\y\p\a\e\r\a\9\i\q\g\9\y\i\s\h\g\p\z\3\2\k\u\s\2\f\p\o\r\o\8\0\u\3\v\s\0\w\t\t\a\0\b\z\9\t\m\l\y\5\2\1\8\f\4\b\p\s\t\s\1\n\z\6\l\x\v\h\e\x\i\c\g\4\i ]] 00:07:24.167 ************************************ 00:07:24.167 END TEST dd_rw_offset 00:07:24.167 ************************************ 00:07:24.167 00:07:24.167 real 0m3.154s 00:07:24.167 user 0m2.646s 00:07:24.167 sys 0m1.650s 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:24.167 05:25:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.167 { 00:07:24.167 "subsystems": [ 00:07:24.167 { 00:07:24.167 "subsystem": "bdev", 00:07:24.167 "config": [ 00:07:24.167 { 00:07:24.167 "params": { 00:07:24.167 "trtype": "pcie", 00:07:24.167 "traddr": "0000:00:10.0", 00:07:24.167 "name": "Nvme0" 00:07:24.167 }, 00:07:24.167 "method": "bdev_nvme_attach_controller" 00:07:24.167 }, 00:07:24.167 { 00:07:24.167 "method": "bdev_wait_for_examine" 00:07:24.167 } 00:07:24.167 ] 00:07:24.167 } 00:07:24.167 ] 00:07:24.167 } 00:07:24.167 [2024-12-16 05:25:04.258560] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:24.167 [2024-12-16 05:25:04.258762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63362 ] 00:07:24.426 [2024-12-16 05:25:04.438383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.426 [2024-12-16 05:25:04.528697] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.684 [2024-12-16 05:25:04.688791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.684  [2024-12-16T05:25:05.879Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:25.620 00:07:25.620 05:25:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.620 ************************************ 00:07:25.620 END TEST spdk_dd_basic_rw 00:07:25.620 ************************************ 00:07:25.620 00:07:25.620 real 0m38.822s 00:07:25.620 user 0m32.391s 00:07:25.620 sys 0m17.911s 00:07:25.620 05:25:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.620 05:25:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.620 05:25:05 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:25.620 05:25:05 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.620 05:25:05 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.620 05:25:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:25.620 ************************************ 00:07:25.620 START TEST spdk_dd_posix 00:07:25.620 ************************************ 00:07:25.620 05:25:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:25.879 * Looking for test storage... 
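[editor's sketch] The "START TEST"/"END TEST" banners and the real/user/sys totals that bracket each block of output come from the harness's run_test wrapper, invoked here as run_test spdk_dd_posix .../test/dd/posix.sh. The following is a deliberately simplified, hypothetical illustration of that pattern, not the actual autotest_common.sh implementation.

# Hypothetical run_test-style wrapper: print the banners and timing totals
# seen in this log around a named test command.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys lines, as above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Usage, mirroring the invocation in the log:
# run_test_sketch spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh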
00:07:25.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:25.879 05:25:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:25.879 05:25:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:07:25.879 05:25:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.879 --rc genhtml_branch_coverage=1 00:07:25.879 --rc genhtml_function_coverage=1 00:07:25.879 --rc genhtml_legend=1 00:07:25.879 --rc geninfo_all_blocks=1 00:07:25.879 --rc geninfo_unexecuted_blocks=1 00:07:25.879 00:07:25.879 ' 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.879 --rc genhtml_branch_coverage=1 00:07:25.879 --rc genhtml_function_coverage=1 00:07:25.879 --rc genhtml_legend=1 00:07:25.879 --rc geninfo_all_blocks=1 00:07:25.879 --rc geninfo_unexecuted_blocks=1 00:07:25.879 00:07:25.879 ' 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.879 --rc genhtml_branch_coverage=1 00:07:25.879 --rc genhtml_function_coverage=1 00:07:25.879 --rc genhtml_legend=1 00:07:25.879 --rc geninfo_all_blocks=1 00:07:25.879 --rc geninfo_unexecuted_blocks=1 00:07:25.879 00:07:25.879 ' 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.879 --rc genhtml_branch_coverage=1 00:07:25.879 --rc genhtml_function_coverage=1 00:07:25.879 --rc genhtml_legend=1 00:07:25.879 --rc geninfo_all_blocks=1 00:07:25.879 --rc geninfo_unexecuted_blocks=1 00:07:25.879 00:07:25.879 ' 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.879 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:25.880 * First test run, liburing in use 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 ************************************ 00:07:25.880 START TEST dd_flag_append 00:07:25.880 ************************************ 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=jsryikrczb0ut1apf1vr8wz2ohsupxjf 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=kaqixt3z0379l4ryscp3ypg3y99xerj8 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s jsryikrczb0ut1apf1vr8wz2ohsupxjf 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s kaqixt3z0379l4ryscp3ypg3y99xerj8 00:07:25.880 05:25:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:26.139 [2024-12-16 05:25:06.214153] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
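[editor's sketch] The dd_flag_append test above generates two 32-byte strings, writes each to its dump file, and copies dump0 onto dump1 with --oflag=append; the check in the lines that follow asserts that dump1 now holds the second string followed by the first. A compact sketch of the same flow, with /tmp paths and made-up "A"/"B" payloads standing in for the harness's dump files and generated bytes.

# dd_flag_append sketch: --oflag=append must concatenate rather than truncate.
a=$(printf 'A%.0s' {1..32})
b=$(printf 'B%.0s' {1..32})
printf %s "$a" > /tmp/dd.dump0
printf %s "$b" > /tmp/dd.dump1

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/tmp/dd.dump0 --of=/tmp/dd.dump1 --oflag=append

# dump1 should now be the original dump1 contents followed by dump0.
[[ $(cat /tmp/dd.dump1) == "$b$a" ]] && echo "append OK"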
00:07:26.139 [2024-12-16 05:25:06.214389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63441 ] 00:07:26.404 [2024-12-16 05:25:06.398245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.404 [2024-12-16 05:25:06.501588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.663 [2024-12-16 05:25:06.678434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.663  [2024-12-16T05:25:07.858Z] Copying: 32/32 [B] (average 31 kBps) 00:07:27.599 00:07:27.599 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ kaqixt3z0379l4ryscp3ypg3y99xerj8jsryikrczb0ut1apf1vr8wz2ohsupxjf == \k\a\q\i\x\t\3\z\0\3\7\9\l\4\r\y\s\c\p\3\y\p\g\3\y\9\9\x\e\r\j\8\j\s\r\y\i\k\r\c\z\b\0\u\t\1\a\p\f\1\v\r\8\w\z\2\o\h\s\u\p\x\j\f ]] 00:07:27.599 00:07:27.599 real 0m1.701s 00:07:27.599 user 0m1.372s 00:07:27.599 sys 0m0.925s 00:07:27.600 ************************************ 00:07:27.600 END TEST dd_flag_append 00:07:27.600 ************************************ 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:27.600 ************************************ 00:07:27.600 START TEST dd_flag_directory 00:07:27.600 ************************************ 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.600 05:25:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:27.859 [2024-12-16 05:25:07.935536] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:27.859 [2024-12-16 05:25:07.935780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63487 ] 00:07:28.118 [2024-12-16 05:25:08.117472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.118 [2024-12-16 05:25:08.209596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.377 [2024-12-16 05:25:08.389479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.377 [2024-12-16 05:25:08.477740] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:28.377 [2024-12-16 05:25:08.477826] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:28.377 [2024-12-16 05:25:08.477847] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.944 [2024-12-16 05:25:09.102513] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:29.203 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.204 05:25:09 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.204 05:25:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:29.463 [2024-12-16 05:25:09.480342] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:29.463 [2024-12-16 05:25:09.480529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63514 ] 00:07:29.463 [2024-12-16 05:25:09.661390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.722 [2024-12-16 05:25:09.748994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.722 [2024-12-16 05:25:09.902802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.981 [2024-12-16 05:25:09.994200] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.981 [2024-12-16 05:25:09.994294] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.981 [2024-12-16 05:25:09.994348] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.549 [2024-12-16 05:25:10.645187] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.808 00:07:30.808 real 0m3.048s 00:07:30.808 user 0m2.425s 00:07:30.808 sys 0m0.402s 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.808 ************************************ 00:07:30.808 END TEST dd_flag_directory 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:30.808 ************************************ 00:07:30.808 05:25:10 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:30.808 ************************************ 00:07:30.808 START TEST dd_flag_nofollow 00:07:30.808 ************************************ 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.808 05:25:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.808 [2024-12-16 05:25:11.032688] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
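[editor's sketch] Both the directory test above and the nofollow test starting here drive spdk_dd through the harness's NOT helper, meaning they pass only when the command exits non-zero with the expected error ("Not a directory" for the directory flags, "Too many levels of symbolic links" for the symlink case). A sketch of that expect-failure pattern, using a plain bash "!" in place of the NOT/valid_exec_arg machinery and the same paths the log uses.

dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

# Expected failure 1: treating a regular file as a directory input.
! "$dd_bin" --if="$dump0" --iflag=directory --of="$dump0" \
    || { echo "expected 'Not a directory' failure"; exit 1; }

# Expected failure 2: refusing a symlinked input when --iflag=nofollow is set.
ln -fs "$dump0" "$dump0.link"
! "$dd_bin" --if="$dump0.link" --iflag=nofollow --of="$dump1" \
    || { echo "expected 'Too many levels of symbolic links' failure"; exit 1; }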
00:07:30.808 [2024-12-16 05:25:11.032861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63549 ] 00:07:31.067 [2024-12-16 05:25:11.210790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.067 [2024-12-16 05:25:11.296961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.325 [2024-12-16 05:25:11.460388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.325 [2024-12-16 05:25:11.553785] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:31.325 [2024-12-16 05:25:11.553856] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:31.325 [2024-12-16 05:25:11.553893] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.935 [2024-12-16 05:25:12.120393] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.193 05:25:12 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.193 05:25:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:33.406 [2024-12-16 05:25:12.465469] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:33.406 [2024-12-16 05:25:12.465671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63565 ] 00:07:33.406 [2024-12-16 05:25:12.637174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.406 [2024-12-16 05:25:12.718395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.406 [2024-12-16 05:25:12.859690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.406 [2024-12-16 05:25:12.942068] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:33.406 [2024-12-16 05:25:12.942163] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:33.406 [2024-12-16 05:25:12.942185] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.406 [2024-12-16 05:25:13.523068] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:33.665 05:25:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.665 [2024-12-16 05:25:13.886176] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:33.665 [2024-12-16 05:25:13.886371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63590 ] 00:07:33.924 [2024-12-16 05:25:14.063658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.924 [2024-12-16 05:25:14.143666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.182 [2024-12-16 05:25:14.288484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.183  [2024-12-16T05:25:15.388Z] Copying: 512/512 [B] (average 500 kBps) 00:07:35.129 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 4aavb0w9uo1hvbama4df6w968g5jiw69c2cpd18598f3bfhcl8dlqdmnrsu6zo2wm9uzj8lhxeawekj4z3ma6uvyzq1welgelcp6b7sguqrb1jnw97h32k0mu2glj4gbbos7p2ckzl6sx039nzg8irbw33evyiufxjxqkgcdoqw7lxhb79c853r02jn5pwa70h96uc4nmtzlscyh71kmpw76ljc47c7k87i770qzhvum2u7cbup6z05b4x647yslm59skxgb3tesn5wdbbk875xzo1on6n53ql6e5kquefkhta1089wdzl5hl0z76b83x58x2yd1o7mo5cfsdl0w3me5dmzpjcrf3gix2ejhdnni84f12dzteupfvjqmtj4j435pauy0i5gktr54jp96t2juys1kunznkzx18bp15c2h2oo7mk6z1ve6bv8gag325wno6s0yk5hao1l1irx394e8gbswc3kadzjd37leo9vwauk7sn982s2d4kcdo98m == \4\a\a\v\b\0\w\9\u\o\1\h\v\b\a\m\a\4\d\f\6\w\9\6\8\g\5\j\i\w\6\9\c\2\c\p\d\1\8\5\9\8\f\3\b\f\h\c\l\8\d\l\q\d\m\n\r\s\u\6\z\o\2\w\m\9\u\z\j\8\l\h\x\e\a\w\e\k\j\4\z\3\m\a\6\u\v\y\z\q\1\w\e\l\g\e\l\c\p\6\b\7\s\g\u\q\r\b\1\j\n\w\9\7\h\3\2\k\0\m\u\2\g\l\j\4\g\b\b\o\s\7\p\2\c\k\z\l\6\s\x\0\3\9\n\z\g\8\i\r\b\w\3\3\e\v\y\i\u\f\x\j\x\q\k\g\c\d\o\q\w\7\l\x\h\b\7\9\c\8\5\3\r\0\2\j\n\5\p\w\a\7\0\h\9\6\u\c\4\n\m\t\z\l\s\c\y\h\7\1\k\m\p\w\7\6\l\j\c\4\7\c\7\k\8\7\i\7\7\0\q\z\h\v\u\m\2\u\7\c\b\u\p\6\z\0\5\b\4\x\6\4\7\y\s\l\m\5\9\s\k\x\g\b\3\t\e\s\n\5\w\d\b\b\k\8\7\5\x\z\o\1\o\n\6\n\5\3\q\l\6\e\5\k\q\u\e\f\k\h\t\a\1\0\8\9\w\d\z\l\5\h\l\0\z\7\6\b\8\3\x\5\8\x\2\y\d\1\o\7\m\o\5\c\f\s\d\l\0\w\3\m\e\5\d\m\z\p\j\c\r\f\3\g\i\x\2\e\j\h\d\n\n\i\8\4\f\1\2\d\z\t\e\u\p\f\v\j\q\m\t\j\4\j\4\3\5\p\a\u\y\0\i\5\g\k\t\r\5\4\j\p\9\6\t\2\j\u\y\s\1\k\u\n\z\n\k\z\x\1\8\b\p\1\5\c\2\h\2\o\o\7\m\k\6\z\1\v\e\6\b\v\8\g\a\g\3\2\5\w\n\o\6\s\0\y\k\5\h\a\o\1\l\1\i\r\x\3\9\4\e\8\g\b\s\w\c\3\k\a\d\z\j\d\3\7\l\e\o\9\v\w\a\u\k\7\s\n\9\8\2\s\2\d\4\k\c\d\o\9\8\m ]] 00:07:35.130 00:07:35.130 real 0m4.315s 00:07:35.130 user 0m3.419s 00:07:35.130 sys 0m1.195s 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:35.130 ************************************ 00:07:35.130 END TEST dd_flag_nofollow 00:07:35.130 ************************************ 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:35.130 ************************************ 00:07:35.130 START TEST dd_flag_noatime 00:07:35.130 ************************************ 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1734326714 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1734326715 00:07:35.130 05:25:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:36.065 05:25:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.324 [2024-12-16 05:25:16.417878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:36.324 [2024-12-16 05:25:16.418112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63639 ] 00:07:36.583 [2024-12-16 05:25:16.605020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.583 [2024-12-16 05:25:16.734112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.842 [2024-12-16 05:25:16.912070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.842  [2024-12-16T05:25:18.035Z] Copying: 512/512 [B] (average 500 kBps) 00:07:37.776 00:07:37.776 05:25:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.776 05:25:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1734326714 )) 00:07:37.776 05:25:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.776 05:25:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1734326715 )) 00:07:37.777 05:25:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.777 [2024-12-16 05:25:18.013177] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:37.777 [2024-12-16 05:25:18.013371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63670 ] 00:07:38.035 [2024-12-16 05:25:18.192552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.035 [2024-12-16 05:25:18.272509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.294 [2024-12-16 05:25:18.417783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.294  [2024-12-16T05:25:19.501Z] Copying: 512/512 [B] (average 500 kBps) 00:07:39.242 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1734326718 )) 00:07:39.242 00:07:39.242 real 0m4.077s 00:07:39.242 user 0m2.418s 00:07:39.242 sys 0m1.715s 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:39.242 ************************************ 00:07:39.242 END TEST dd_flag_noatime 00:07:39.242 ************************************ 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:39.242 ************************************ 00:07:39.242 START TEST dd_flags_misc 00:07:39.242 ************************************ 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:39.242 05:25:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:39.516 [2024-12-16 05:25:19.533565] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:39.516 [2024-12-16 05:25:19.533756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63705 ] 00:07:39.516 [2024-12-16 05:25:19.714454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.776 [2024-12-16 05:25:19.835643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.776 [2024-12-16 05:25:20.004762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.034  [2024-12-16T05:25:21.228Z] Copying: 512/512 [B] (average 500 kBps) 00:07:40.969 00:07:40.969 05:25:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t3kl8du6ut2ov1mlus42cuhutw6muhorm7967r98gctv5p8pfbdek4v2h8qfoyf6mr82q1lye1xf9vzele8f9hf6j97l8hvs220bh8podnn9ofr0mdafb88dv3d09uael07qisahm2pcjd561ch1alp8c0z2284qi9vaca7yrf3hk2l3ytlrue5dwa1w8a9waegyk5imevq5npxirgk1xzwfzmthnxx08wfya6cge9uqkgxda36kbx220t4epv0292s8d23nqlk3har57wscvq5hs0273a8jshrw92bbbhjws2gv2a0errp6ggx97uqltd5i4vkw9srvh3qipb2v8wjmamq4c722zlizw8bm61mc0kyfash8zvg8pk7v4nfgj5pj5al881uqnrwshnicn67g68ht6udwnwt9kl7o90pr87r60i83vj3vacelxmdd7p91f6rlrf59xn8vkoczs29qdgg8ni35jbe4hghscyfvfhs1sey6lqlm6l0udgwo == \t\3\k\l\8\d\u\6\u\t\2\o\v\1\m\l\u\s\4\2\c\u\h\u\t\w\6\m\u\h\o\r\m\7\9\6\7\r\9\8\g\c\t\v\5\p\8\p\f\b\d\e\k\4\v\2\h\8\q\f\o\y\f\6\m\r\8\2\q\1\l\y\e\1\x\f\9\v\z\e\l\e\8\f\9\h\f\6\j\9\7\l\8\h\v\s\2\2\0\b\h\8\p\o\d\n\n\9\o\f\r\0\m\d\a\f\b\8\8\d\v\3\d\0\9\u\a\e\l\0\7\q\i\s\a\h\m\2\p\c\j\d\5\6\1\c\h\1\a\l\p\8\c\0\z\2\2\8\4\q\i\9\v\a\c\a\7\y\r\f\3\h\k\2\l\3\y\t\l\r\u\e\5\d\w\a\1\w\8\a\9\w\a\e\g\y\k\5\i\m\e\v\q\5\n\p\x\i\r\g\k\1\x\z\w\f\z\m\t\h\n\x\x\0\8\w\f\y\a\6\c\g\e\9\u\q\k\g\x\d\a\3\6\k\b\x\2\2\0\t\4\e\p\v\0\2\9\2\s\8\d\2\3\n\q\l\k\3\h\a\r\5\7\w\s\c\v\q\5\h\s\0\2\7\3\a\8\j\s\h\r\w\9\2\b\b\b\h\j\w\s\2\g\v\2\a\0\e\r\r\p\6\g\g\x\9\7\u\q\l\t\d\5\i\4\v\k\w\9\s\r\v\h\3\q\i\p\b\2\v\8\w\j\m\a\m\q\4\c\7\2\2\z\l\i\z\w\8\b\m\6\1\m\c\0\k\y\f\a\s\h\8\z\v\g\8\p\k\7\v\4\n\f\g\j\5\p\j\5\a\l\8\8\1\u\q\n\r\w\s\h\n\i\c\n\6\7\g\6\8\h\t\6\u\d\w\n\w\t\9\k\l\7\o\9\0\p\r\8\7\r\6\0\i\8\3\v\j\3\v\a\c\e\l\x\m\d\d\7\p\9\1\f\6\r\l\r\f\5\9\x\n\8\v\k\o\c\z\s\2\9\q\d\g\g\8\n\i\3\5\j\b\e\4\h\g\h\s\c\y\f\v\f\h\s\1\s\e\y\6\l\q\l\m\6\l\0\u\d\g\w\o ]] 00:07:40.969 05:25:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:40.969 05:25:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:40.969 [2024-12-16 05:25:21.044794] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:40.969 [2024-12-16 05:25:21.044990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63727 ] 00:07:40.969 [2024-12-16 05:25:21.217273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.228 [2024-12-16 05:25:21.302046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.228 [2024-12-16 05:25:21.446208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.486  [2024-12-16T05:25:22.681Z] Copying: 512/512 [B] (average 500 kBps) 00:07:42.422 00:07:42.422 05:25:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t3kl8du6ut2ov1mlus42cuhutw6muhorm7967r98gctv5p8pfbdek4v2h8qfoyf6mr82q1lye1xf9vzele8f9hf6j97l8hvs220bh8podnn9ofr0mdafb88dv3d09uael07qisahm2pcjd561ch1alp8c0z2284qi9vaca7yrf3hk2l3ytlrue5dwa1w8a9waegyk5imevq5npxirgk1xzwfzmthnxx08wfya6cge9uqkgxda36kbx220t4epv0292s8d23nqlk3har57wscvq5hs0273a8jshrw92bbbhjws2gv2a0errp6ggx97uqltd5i4vkw9srvh3qipb2v8wjmamq4c722zlizw8bm61mc0kyfash8zvg8pk7v4nfgj5pj5al881uqnrwshnicn67g68ht6udwnwt9kl7o90pr87r60i83vj3vacelxmdd7p91f6rlrf59xn8vkoczs29qdgg8ni35jbe4hghscyfvfhs1sey6lqlm6l0udgwo == \t\3\k\l\8\d\u\6\u\t\2\o\v\1\m\l\u\s\4\2\c\u\h\u\t\w\6\m\u\h\o\r\m\7\9\6\7\r\9\8\g\c\t\v\5\p\8\p\f\b\d\e\k\4\v\2\h\8\q\f\o\y\f\6\m\r\8\2\q\1\l\y\e\1\x\f\9\v\z\e\l\e\8\f\9\h\f\6\j\9\7\l\8\h\v\s\2\2\0\b\h\8\p\o\d\n\n\9\o\f\r\0\m\d\a\f\b\8\8\d\v\3\d\0\9\u\a\e\l\0\7\q\i\s\a\h\m\2\p\c\j\d\5\6\1\c\h\1\a\l\p\8\c\0\z\2\2\8\4\q\i\9\v\a\c\a\7\y\r\f\3\h\k\2\l\3\y\t\l\r\u\e\5\d\w\a\1\w\8\a\9\w\a\e\g\y\k\5\i\m\e\v\q\5\n\p\x\i\r\g\k\1\x\z\w\f\z\m\t\h\n\x\x\0\8\w\f\y\a\6\c\g\e\9\u\q\k\g\x\d\a\3\6\k\b\x\2\2\0\t\4\e\p\v\0\2\9\2\s\8\d\2\3\n\q\l\k\3\h\a\r\5\7\w\s\c\v\q\5\h\s\0\2\7\3\a\8\j\s\h\r\w\9\2\b\b\b\h\j\w\s\2\g\v\2\a\0\e\r\r\p\6\g\g\x\9\7\u\q\l\t\d\5\i\4\v\k\w\9\s\r\v\h\3\q\i\p\b\2\v\8\w\j\m\a\m\q\4\c\7\2\2\z\l\i\z\w\8\b\m\6\1\m\c\0\k\y\f\a\s\h\8\z\v\g\8\p\k\7\v\4\n\f\g\j\5\p\j\5\a\l\8\8\1\u\q\n\r\w\s\h\n\i\c\n\6\7\g\6\8\h\t\6\u\d\w\n\w\t\9\k\l\7\o\9\0\p\r\8\7\r\6\0\i\8\3\v\j\3\v\a\c\e\l\x\m\d\d\7\p\9\1\f\6\r\l\r\f\5\9\x\n\8\v\k\o\c\z\s\2\9\q\d\g\g\8\n\i\3\5\j\b\e\4\h\g\h\s\c\y\f\v\f\h\s\1\s\e\y\6\l\q\l\m\6\l\0\u\d\g\w\o ]] 00:07:42.422 05:25:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:42.422 05:25:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:42.422 [2024-12-16 05:25:22.518831] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:42.422 [2024-12-16 05:25:22.519054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63748 ] 00:07:42.680 [2024-12-16 05:25:22.700212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.680 [2024-12-16 05:25:22.795756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.938 [2024-12-16 05:25:22.953028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.938  [2024-12-16T05:25:24.132Z] Copying: 512/512 [B] (average 166 kBps) 00:07:43.873 00:07:43.873 05:25:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t3kl8du6ut2ov1mlus42cuhutw6muhorm7967r98gctv5p8pfbdek4v2h8qfoyf6mr82q1lye1xf9vzele8f9hf6j97l8hvs220bh8podnn9ofr0mdafb88dv3d09uael07qisahm2pcjd561ch1alp8c0z2284qi9vaca7yrf3hk2l3ytlrue5dwa1w8a9waegyk5imevq5npxirgk1xzwfzmthnxx08wfya6cge9uqkgxda36kbx220t4epv0292s8d23nqlk3har57wscvq5hs0273a8jshrw92bbbhjws2gv2a0errp6ggx97uqltd5i4vkw9srvh3qipb2v8wjmamq4c722zlizw8bm61mc0kyfash8zvg8pk7v4nfgj5pj5al881uqnrwshnicn67g68ht6udwnwt9kl7o90pr87r60i83vj3vacelxmdd7p91f6rlrf59xn8vkoczs29qdgg8ni35jbe4hghscyfvfhs1sey6lqlm6l0udgwo == \t\3\k\l\8\d\u\6\u\t\2\o\v\1\m\l\u\s\4\2\c\u\h\u\t\w\6\m\u\h\o\r\m\7\9\6\7\r\9\8\g\c\t\v\5\p\8\p\f\b\d\e\k\4\v\2\h\8\q\f\o\y\f\6\m\r\8\2\q\1\l\y\e\1\x\f\9\v\z\e\l\e\8\f\9\h\f\6\j\9\7\l\8\h\v\s\2\2\0\b\h\8\p\o\d\n\n\9\o\f\r\0\m\d\a\f\b\8\8\d\v\3\d\0\9\u\a\e\l\0\7\q\i\s\a\h\m\2\p\c\j\d\5\6\1\c\h\1\a\l\p\8\c\0\z\2\2\8\4\q\i\9\v\a\c\a\7\y\r\f\3\h\k\2\l\3\y\t\l\r\u\e\5\d\w\a\1\w\8\a\9\w\a\e\g\y\k\5\i\m\e\v\q\5\n\p\x\i\r\g\k\1\x\z\w\f\z\m\t\h\n\x\x\0\8\w\f\y\a\6\c\g\e\9\u\q\k\g\x\d\a\3\6\k\b\x\2\2\0\t\4\e\p\v\0\2\9\2\s\8\d\2\3\n\q\l\k\3\h\a\r\5\7\w\s\c\v\q\5\h\s\0\2\7\3\a\8\j\s\h\r\w\9\2\b\b\b\h\j\w\s\2\g\v\2\a\0\e\r\r\p\6\g\g\x\9\7\u\q\l\t\d\5\i\4\v\k\w\9\s\r\v\h\3\q\i\p\b\2\v\8\w\j\m\a\m\q\4\c\7\2\2\z\l\i\z\w\8\b\m\6\1\m\c\0\k\y\f\a\s\h\8\z\v\g\8\p\k\7\v\4\n\f\g\j\5\p\j\5\a\l\8\8\1\u\q\n\r\w\s\h\n\i\c\n\6\7\g\6\8\h\t\6\u\d\w\n\w\t\9\k\l\7\o\9\0\p\r\8\7\r\6\0\i\8\3\v\j\3\v\a\c\e\l\x\m\d\d\7\p\9\1\f\6\r\l\r\f\5\9\x\n\8\v\k\o\c\z\s\2\9\q\d\g\g\8\n\i\3\5\j\b\e\4\h\g\h\s\c\y\f\v\f\h\s\1\s\e\y\6\l\q\l\m\6\l\0\u\d\g\w\o ]] 00:07:43.873 05:25:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:43.873 05:25:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:43.873 [2024-12-16 05:25:24.029304] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:43.873 [2024-12-16 05:25:24.029496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63770 ] 00:07:44.132 [2024-12-16 05:25:24.207407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.132 [2024-12-16 05:25:24.298966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.390 [2024-12-16 05:25:24.451535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.390  [2024-12-16T05:25:25.584Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.325 00:07:45.325 05:25:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t3kl8du6ut2ov1mlus42cuhutw6muhorm7967r98gctv5p8pfbdek4v2h8qfoyf6mr82q1lye1xf9vzele8f9hf6j97l8hvs220bh8podnn9ofr0mdafb88dv3d09uael07qisahm2pcjd561ch1alp8c0z2284qi9vaca7yrf3hk2l3ytlrue5dwa1w8a9waegyk5imevq5npxirgk1xzwfzmthnxx08wfya6cge9uqkgxda36kbx220t4epv0292s8d23nqlk3har57wscvq5hs0273a8jshrw92bbbhjws2gv2a0errp6ggx97uqltd5i4vkw9srvh3qipb2v8wjmamq4c722zlizw8bm61mc0kyfash8zvg8pk7v4nfgj5pj5al881uqnrwshnicn67g68ht6udwnwt9kl7o90pr87r60i83vj3vacelxmdd7p91f6rlrf59xn8vkoczs29qdgg8ni35jbe4hghscyfvfhs1sey6lqlm6l0udgwo == \t\3\k\l\8\d\u\6\u\t\2\o\v\1\m\l\u\s\4\2\c\u\h\u\t\w\6\m\u\h\o\r\m\7\9\6\7\r\9\8\g\c\t\v\5\p\8\p\f\b\d\e\k\4\v\2\h\8\q\f\o\y\f\6\m\r\8\2\q\1\l\y\e\1\x\f\9\v\z\e\l\e\8\f\9\h\f\6\j\9\7\l\8\h\v\s\2\2\0\b\h\8\p\o\d\n\n\9\o\f\r\0\m\d\a\f\b\8\8\d\v\3\d\0\9\u\a\e\l\0\7\q\i\s\a\h\m\2\p\c\j\d\5\6\1\c\h\1\a\l\p\8\c\0\z\2\2\8\4\q\i\9\v\a\c\a\7\y\r\f\3\h\k\2\l\3\y\t\l\r\u\e\5\d\w\a\1\w\8\a\9\w\a\e\g\y\k\5\i\m\e\v\q\5\n\p\x\i\r\g\k\1\x\z\w\f\z\m\t\h\n\x\x\0\8\w\f\y\a\6\c\g\e\9\u\q\k\g\x\d\a\3\6\k\b\x\2\2\0\t\4\e\p\v\0\2\9\2\s\8\d\2\3\n\q\l\k\3\h\a\r\5\7\w\s\c\v\q\5\h\s\0\2\7\3\a\8\j\s\h\r\w\9\2\b\b\b\h\j\w\s\2\g\v\2\a\0\e\r\r\p\6\g\g\x\9\7\u\q\l\t\d\5\i\4\v\k\w\9\s\r\v\h\3\q\i\p\b\2\v\8\w\j\m\a\m\q\4\c\7\2\2\z\l\i\z\w\8\b\m\6\1\m\c\0\k\y\f\a\s\h\8\z\v\g\8\p\k\7\v\4\n\f\g\j\5\p\j\5\a\l\8\8\1\u\q\n\r\w\s\h\n\i\c\n\6\7\g\6\8\h\t\6\u\d\w\n\w\t\9\k\l\7\o\9\0\p\r\8\7\r\6\0\i\8\3\v\j\3\v\a\c\e\l\x\m\d\d\7\p\9\1\f\6\r\l\r\f\5\9\x\n\8\v\k\o\c\z\s\2\9\q\d\g\g\8\n\i\3\5\j\b\e\4\h\g\h\s\c\y\f\v\f\h\s\1\s\e\y\6\l\q\l\m\6\l\0\u\d\g\w\o ]] 00:07:45.325 05:25:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:45.325 05:25:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:45.325 05:25:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:45.325 05:25:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:45.325 05:25:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.325 05:25:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:45.325 [2024-12-16 05:25:25.530208] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:45.325 [2024-12-16 05:25:25.530413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63791 ] 00:07:45.584 [2024-12-16 05:25:25.711878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.584 [2024-12-16 05:25:25.797976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.843 [2024-12-16 05:25:25.951041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.843  [2024-12-16T05:25:27.037Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.778 00:07:46.778 05:25:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xn1elov5cubr794fl22u6ure0e09kmn9vehevua70imppmzn1ipltwvfkb0lylj2nbv7pud03ylel9wyvizbk136cd9iu3gz35cwqx6lrxvbhsm82iojs1ji8iul6dmizitttq77jkg3xnh33iplrzqcrii38zoxtt08ge3tfc4srofi22o1k3i6bez286199lgrz3fm00pdqwblnu5x59bm8g35bgxo3vbjblpkj9yl8evafa8sij6bezwnfeg279oegyqjbfc9dsn13b8s5sebvb8nu013epfa5py8xxr3vicx28u111e4dcctgmh9b81hpfn23ncplbksazpj9dir8xdnfijzuxqeoreun4t1jyaly165qp4uquamrmj17y00itoyg4vvhrj8jyf7jklwtalopckou4wqlc4fzz9gxzjlxq5vw9hoo5tkf9e1weqskxzxhnlzprf6cfb1o6hwa0pjsc77aqrxhofrgrmbbzjc7xoj07yhhi47a086 == \x\n\1\e\l\o\v\5\c\u\b\r\7\9\4\f\l\2\2\u\6\u\r\e\0\e\0\9\k\m\n\9\v\e\h\e\v\u\a\7\0\i\m\p\p\m\z\n\1\i\p\l\t\w\v\f\k\b\0\l\y\l\j\2\n\b\v\7\p\u\d\0\3\y\l\e\l\9\w\y\v\i\z\b\k\1\3\6\c\d\9\i\u\3\g\z\3\5\c\w\q\x\6\l\r\x\v\b\h\s\m\8\2\i\o\j\s\1\j\i\8\i\u\l\6\d\m\i\z\i\t\t\t\q\7\7\j\k\g\3\x\n\h\3\3\i\p\l\r\z\q\c\r\i\i\3\8\z\o\x\t\t\0\8\g\e\3\t\f\c\4\s\r\o\f\i\2\2\o\1\k\3\i\6\b\e\z\2\8\6\1\9\9\l\g\r\z\3\f\m\0\0\p\d\q\w\b\l\n\u\5\x\5\9\b\m\8\g\3\5\b\g\x\o\3\v\b\j\b\l\p\k\j\9\y\l\8\e\v\a\f\a\8\s\i\j\6\b\e\z\w\n\f\e\g\2\7\9\o\e\g\y\q\j\b\f\c\9\d\s\n\1\3\b\8\s\5\s\e\b\v\b\8\n\u\0\1\3\e\p\f\a\5\p\y\8\x\x\r\3\v\i\c\x\2\8\u\1\1\1\e\4\d\c\c\t\g\m\h\9\b\8\1\h\p\f\n\2\3\n\c\p\l\b\k\s\a\z\p\j\9\d\i\r\8\x\d\n\f\i\j\z\u\x\q\e\o\r\e\u\n\4\t\1\j\y\a\l\y\1\6\5\q\p\4\u\q\u\a\m\r\m\j\1\7\y\0\0\i\t\o\y\g\4\v\v\h\r\j\8\j\y\f\7\j\k\l\w\t\a\l\o\p\c\k\o\u\4\w\q\l\c\4\f\z\z\9\g\x\z\j\l\x\q\5\v\w\9\h\o\o\5\t\k\f\9\e\1\w\e\q\s\k\x\z\x\h\n\l\z\p\r\f\6\c\f\b\1\o\6\h\w\a\0\p\j\s\c\7\7\a\q\r\x\h\o\f\r\g\r\m\b\b\z\j\c\7\x\o\j\0\7\y\h\h\i\4\7\a\0\8\6 ]] 00:07:46.778 05:25:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.778 05:25:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:47.036 [2024-12-16 05:25:27.045350] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:47.036 [2024-12-16 05:25:27.045554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63813 ] 00:07:47.036 [2024-12-16 05:25:27.222123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.295 [2024-12-16 05:25:27.305990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.295 [2024-12-16 05:25:27.466738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.553  [2024-12-16T05:25:28.747Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.488 00:07:48.488 05:25:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xn1elov5cubr794fl22u6ure0e09kmn9vehevua70imppmzn1ipltwvfkb0lylj2nbv7pud03ylel9wyvizbk136cd9iu3gz35cwqx6lrxvbhsm82iojs1ji8iul6dmizitttq77jkg3xnh33iplrzqcrii38zoxtt08ge3tfc4srofi22o1k3i6bez286199lgrz3fm00pdqwblnu5x59bm8g35bgxo3vbjblpkj9yl8evafa8sij6bezwnfeg279oegyqjbfc9dsn13b8s5sebvb8nu013epfa5py8xxr3vicx28u111e4dcctgmh9b81hpfn23ncplbksazpj9dir8xdnfijzuxqeoreun4t1jyaly165qp4uquamrmj17y00itoyg4vvhrj8jyf7jklwtalopckou4wqlc4fzz9gxzjlxq5vw9hoo5tkf9e1weqskxzxhnlzprf6cfb1o6hwa0pjsc77aqrxhofrgrmbbzjc7xoj07yhhi47a086 == \x\n\1\e\l\o\v\5\c\u\b\r\7\9\4\f\l\2\2\u\6\u\r\e\0\e\0\9\k\m\n\9\v\e\h\e\v\u\a\7\0\i\m\p\p\m\z\n\1\i\p\l\t\w\v\f\k\b\0\l\y\l\j\2\n\b\v\7\p\u\d\0\3\y\l\e\l\9\w\y\v\i\z\b\k\1\3\6\c\d\9\i\u\3\g\z\3\5\c\w\q\x\6\l\r\x\v\b\h\s\m\8\2\i\o\j\s\1\j\i\8\i\u\l\6\d\m\i\z\i\t\t\t\q\7\7\j\k\g\3\x\n\h\3\3\i\p\l\r\z\q\c\r\i\i\3\8\z\o\x\t\t\0\8\g\e\3\t\f\c\4\s\r\o\f\i\2\2\o\1\k\3\i\6\b\e\z\2\8\6\1\9\9\l\g\r\z\3\f\m\0\0\p\d\q\w\b\l\n\u\5\x\5\9\b\m\8\g\3\5\b\g\x\o\3\v\b\j\b\l\p\k\j\9\y\l\8\e\v\a\f\a\8\s\i\j\6\b\e\z\w\n\f\e\g\2\7\9\o\e\g\y\q\j\b\f\c\9\d\s\n\1\3\b\8\s\5\s\e\b\v\b\8\n\u\0\1\3\e\p\f\a\5\p\y\8\x\x\r\3\v\i\c\x\2\8\u\1\1\1\e\4\d\c\c\t\g\m\h\9\b\8\1\h\p\f\n\2\3\n\c\p\l\b\k\s\a\z\p\j\9\d\i\r\8\x\d\n\f\i\j\z\u\x\q\e\o\r\e\u\n\4\t\1\j\y\a\l\y\1\6\5\q\p\4\u\q\u\a\m\r\m\j\1\7\y\0\0\i\t\o\y\g\4\v\v\h\r\j\8\j\y\f\7\j\k\l\w\t\a\l\o\p\c\k\o\u\4\w\q\l\c\4\f\z\z\9\g\x\z\j\l\x\q\5\v\w\9\h\o\o\5\t\k\f\9\e\1\w\e\q\s\k\x\z\x\h\n\l\z\p\r\f\6\c\f\b\1\o\6\h\w\a\0\p\j\s\c\7\7\a\q\r\x\h\o\f\r\g\r\m\b\b\z\j\c\7\x\o\j\0\7\y\h\h\i\4\7\a\0\8\6 ]] 00:07:48.488 05:25:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:48.488 05:25:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:48.488 [2024-12-16 05:25:28.517725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:48.488 [2024-12-16 05:25:28.517909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63829 ] 00:07:48.488 [2024-12-16 05:25:28.692706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.747 [2024-12-16 05:25:28.780828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.747 [2024-12-16 05:25:28.927814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.007  [2024-12-16T05:25:30.201Z] Copying: 512/512 [B] (average 250 kBps) 00:07:49.942 00:07:49.942 05:25:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xn1elov5cubr794fl22u6ure0e09kmn9vehevua70imppmzn1ipltwvfkb0lylj2nbv7pud03ylel9wyvizbk136cd9iu3gz35cwqx6lrxvbhsm82iojs1ji8iul6dmizitttq77jkg3xnh33iplrzqcrii38zoxtt08ge3tfc4srofi22o1k3i6bez286199lgrz3fm00pdqwblnu5x59bm8g35bgxo3vbjblpkj9yl8evafa8sij6bezwnfeg279oegyqjbfc9dsn13b8s5sebvb8nu013epfa5py8xxr3vicx28u111e4dcctgmh9b81hpfn23ncplbksazpj9dir8xdnfijzuxqeoreun4t1jyaly165qp4uquamrmj17y00itoyg4vvhrj8jyf7jklwtalopckou4wqlc4fzz9gxzjlxq5vw9hoo5tkf9e1weqskxzxhnlzprf6cfb1o6hwa0pjsc77aqrxhofrgrmbbzjc7xoj07yhhi47a086 == \x\n\1\e\l\o\v\5\c\u\b\r\7\9\4\f\l\2\2\u\6\u\r\e\0\e\0\9\k\m\n\9\v\e\h\e\v\u\a\7\0\i\m\p\p\m\z\n\1\i\p\l\t\w\v\f\k\b\0\l\y\l\j\2\n\b\v\7\p\u\d\0\3\y\l\e\l\9\w\y\v\i\z\b\k\1\3\6\c\d\9\i\u\3\g\z\3\5\c\w\q\x\6\l\r\x\v\b\h\s\m\8\2\i\o\j\s\1\j\i\8\i\u\l\6\d\m\i\z\i\t\t\t\q\7\7\j\k\g\3\x\n\h\3\3\i\p\l\r\z\q\c\r\i\i\3\8\z\o\x\t\t\0\8\g\e\3\t\f\c\4\s\r\o\f\i\2\2\o\1\k\3\i\6\b\e\z\2\8\6\1\9\9\l\g\r\z\3\f\m\0\0\p\d\q\w\b\l\n\u\5\x\5\9\b\m\8\g\3\5\b\g\x\o\3\v\b\j\b\l\p\k\j\9\y\l\8\e\v\a\f\a\8\s\i\j\6\b\e\z\w\n\f\e\g\2\7\9\o\e\g\y\q\j\b\f\c\9\d\s\n\1\3\b\8\s\5\s\e\b\v\b\8\n\u\0\1\3\e\p\f\a\5\p\y\8\x\x\r\3\v\i\c\x\2\8\u\1\1\1\e\4\d\c\c\t\g\m\h\9\b\8\1\h\p\f\n\2\3\n\c\p\l\b\k\s\a\z\p\j\9\d\i\r\8\x\d\n\f\i\j\z\u\x\q\e\o\r\e\u\n\4\t\1\j\y\a\l\y\1\6\5\q\p\4\u\q\u\a\m\r\m\j\1\7\y\0\0\i\t\o\y\g\4\v\v\h\r\j\8\j\y\f\7\j\k\l\w\t\a\l\o\p\c\k\o\u\4\w\q\l\c\4\f\z\z\9\g\x\z\j\l\x\q\5\v\w\9\h\o\o\5\t\k\f\9\e\1\w\e\q\s\k\x\z\x\h\n\l\z\p\r\f\6\c\f\b\1\o\6\h\w\a\0\p\j\s\c\7\7\a\q\r\x\h\o\f\r\g\r\m\b\b\z\j\c\7\x\o\j\0\7\y\h\h\i\4\7\a\0\8\6 ]] 00:07:49.942 05:25:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:49.942 05:25:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:49.942 [2024-12-16 05:25:29.972583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:49.942 [2024-12-16 05:25:29.972773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63856 ] 00:07:49.942 [2024-12-16 05:25:30.136930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.201 [2024-12-16 05:25:30.221567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.201 [2024-12-16 05:25:30.386214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.460  [2024-12-16T05:25:31.655Z] Copying: 512/512 [B] (average 250 kBps) 00:07:51.396 00:07:51.396 ************************************ 00:07:51.396 END TEST dd_flags_misc 00:07:51.396 ************************************ 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xn1elov5cubr794fl22u6ure0e09kmn9vehevua70imppmzn1ipltwvfkb0lylj2nbv7pud03ylel9wyvizbk136cd9iu3gz35cwqx6lrxvbhsm82iojs1ji8iul6dmizitttq77jkg3xnh33iplrzqcrii38zoxtt08ge3tfc4srofi22o1k3i6bez286199lgrz3fm00pdqwblnu5x59bm8g35bgxo3vbjblpkj9yl8evafa8sij6bezwnfeg279oegyqjbfc9dsn13b8s5sebvb8nu013epfa5py8xxr3vicx28u111e4dcctgmh9b81hpfn23ncplbksazpj9dir8xdnfijzuxqeoreun4t1jyaly165qp4uquamrmj17y00itoyg4vvhrj8jyf7jklwtalopckou4wqlc4fzz9gxzjlxq5vw9hoo5tkf9e1weqskxzxhnlzprf6cfb1o6hwa0pjsc77aqrxhofrgrmbbzjc7xoj07yhhi47a086 == \x\n\1\e\l\o\v\5\c\u\b\r\7\9\4\f\l\2\2\u\6\u\r\e\0\e\0\9\k\m\n\9\v\e\h\e\v\u\a\7\0\i\m\p\p\m\z\n\1\i\p\l\t\w\v\f\k\b\0\l\y\l\j\2\n\b\v\7\p\u\d\0\3\y\l\e\l\9\w\y\v\i\z\b\k\1\3\6\c\d\9\i\u\3\g\z\3\5\c\w\q\x\6\l\r\x\v\b\h\s\m\8\2\i\o\j\s\1\j\i\8\i\u\l\6\d\m\i\z\i\t\t\t\q\7\7\j\k\g\3\x\n\h\3\3\i\p\l\r\z\q\c\r\i\i\3\8\z\o\x\t\t\0\8\g\e\3\t\f\c\4\s\r\o\f\i\2\2\o\1\k\3\i\6\b\e\z\2\8\6\1\9\9\l\g\r\z\3\f\m\0\0\p\d\q\w\b\l\n\u\5\x\5\9\b\m\8\g\3\5\b\g\x\o\3\v\b\j\b\l\p\k\j\9\y\l\8\e\v\a\f\a\8\s\i\j\6\b\e\z\w\n\f\e\g\2\7\9\o\e\g\y\q\j\b\f\c\9\d\s\n\1\3\b\8\s\5\s\e\b\v\b\8\n\u\0\1\3\e\p\f\a\5\p\y\8\x\x\r\3\v\i\c\x\2\8\u\1\1\1\e\4\d\c\c\t\g\m\h\9\b\8\1\h\p\f\n\2\3\n\c\p\l\b\k\s\a\z\p\j\9\d\i\r\8\x\d\n\f\i\j\z\u\x\q\e\o\r\e\u\n\4\t\1\j\y\a\l\y\1\6\5\q\p\4\u\q\u\a\m\r\m\j\1\7\y\0\0\i\t\o\y\g\4\v\v\h\r\j\8\j\y\f\7\j\k\l\w\t\a\l\o\p\c\k\o\u\4\w\q\l\c\4\f\z\z\9\g\x\z\j\l\x\q\5\v\w\9\h\o\o\5\t\k\f\9\e\1\w\e\q\s\k\x\z\x\h\n\l\z\p\r\f\6\c\f\b\1\o\6\h\w\a\0\p\j\s\c\7\7\a\q\r\x\h\o\f\r\g\r\m\b\b\z\j\c\7\x\o\j\0\7\y\h\h\i\4\7\a\0\8\6 ]] 00:07:51.397 00:07:51.397 real 0m11.930s 00:07:51.397 user 0m9.545s 00:07:51.397 sys 0m6.624s 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:51.397 * Second test run, disabling liburing, forcing AIO 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.397 ************************************ 00:07:51.397 START TEST dd_flag_append_forced_aio 00:07:51.397 ************************************ 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=m69n6a7nboxp95u1lgx9fksqm9f8n0ap 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=b5n3mln6nyu4f9w5by2kudipye7qkoc3 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s m69n6a7nboxp95u1lgx9fksqm9f8n0ap 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s b5n3mln6nyu4f9w5by2kudipye7qkoc3 00:07:51.397 05:25:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:51.397 [2024-12-16 05:25:31.499367] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:51.397 [2024-12-16 05:25:31.499523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63891 ] 00:07:51.655 [2024-12-16 05:25:31.665976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.655 [2024-12-16 05:25:31.760121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.914 [2024-12-16 05:25:31.918915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.914  [2024-12-16T05:25:33.108Z] Copying: 32/32 [B] (average 31 kBps) 00:07:52.849 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ b5n3mln6nyu4f9w5by2kudipye7qkoc3m69n6a7nboxp95u1lgx9fksqm9f8n0ap == \b\5\n\3\m\l\n\6\n\y\u\4\f\9\w\5\b\y\2\k\u\d\i\p\y\e\7\q\k\o\c\3\m\6\9\n\6\a\7\n\b\o\x\p\9\5\u\1\l\g\x\9\f\k\s\q\m\9\f\8\n\0\a\p ]] 00:07:52.849 00:07:52.849 real 0m1.479s 00:07:52.849 user 0m1.185s 00:07:52.849 sys 0m0.174s 00:07:52.849 ************************************ 00:07:52.849 END TEST dd_flag_append_forced_aio 00:07:52.849 ************************************ 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:52.849 ************************************ 00:07:52.849 START TEST dd_flag_directory_forced_aio 00:07:52.849 ************************************ 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.849 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:52.850 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.850 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.850 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.850 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.850 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.850 05:25:32 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.850 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.850 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.850 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.850 05:25:32 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.850 [2024-12-16 05:25:33.014421] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:52.850 [2024-12-16 05:25:33.014569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63935 ] 00:07:53.109 [2024-12-16 05:25:33.169231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.109 [2024-12-16 05:25:33.251658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.367 [2024-12-16 05:25:33.408357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.367 [2024-12-16 05:25:33.510181] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:53.367 [2024-12-16 05:25:33.510252] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:53.367 [2024-12-16 05:25:33.510274] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.934 [2024-12-16 05:25:34.139175] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:54.193 05:25:34 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:54.193 [2024-12-16 05:25:34.447788] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:54.193 [2024-12-16 05:25:34.447947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63951 ] 00:07:54.452 [2024-12-16 05:25:34.614588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.452 [2024-12-16 05:25:34.696508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.710 [2024-12-16 05:25:34.852562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.710 [2024-12-16 05:25:34.948121] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:54.710 [2024-12-16 05:25:34.948197] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:54.710 [2024-12-16 05:25:34.948219] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.278 [2024-12-16 05:25:35.521406] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:55.537 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:55.537 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.537 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:55.537 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:55.537 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:55.537 05:25:35 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.537 00:07:55.537 real 0m2.818s 00:07:55.537 user 0m2.256s 00:07:55.537 sys 0m0.344s 00:07:55.537 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.537 ************************************ 00:07:55.537 END TEST dd_flag_directory_forced_aio 00:07:55.537 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:55.537 ************************************ 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:55.795 ************************************ 00:07:55.795 START TEST dd_flag_nofollow_forced_aio 00:07:55.795 ************************************ 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.795 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.796 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.796 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.796 05:25:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.796 [2024-12-16 05:25:35.922350] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:55.796 [2024-12-16 05:25:35.922728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63986 ] 00:07:56.054 [2024-12-16 05:25:36.101250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.054 [2024-12-16 05:25:36.183923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.313 [2024-12-16 05:25:36.327583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.313 [2024-12-16 05:25:36.411984] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:56.313 [2024-12-16 05:25:36.412060] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:56.313 [2024-12-16 05:25:36.412083] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.880 [2024-12-16 05:25:37.007953] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.139 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.140 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.140 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.140 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.140 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.140 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.140 05:25:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:57.140 [2024-12-16 05:25:37.358368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:57.140 [2024-12-16 05:25:37.358666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64013 ] 00:07:57.399 [2024-12-16 05:25:37.545912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.399 [2024-12-16 05:25:37.635277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.658 [2024-12-16 05:25:37.791741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.658 [2024-12-16 05:25:37.884379] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:57.658 [2024-12-16 05:25:37.884471] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:57.658 [2024-12-16 05:25:37.884494] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.596 [2024-12-16 05:25:38.484599] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.596 05:25:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.596 [2024-12-16 05:25:38.828647] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:58.596 [2024-12-16 05:25:38.828892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64027 ] 00:07:58.894 [2024-12-16 05:25:39.005116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.894 [2024-12-16 05:25:39.096125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.153 [2024-12-16 05:25:39.258671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.153  [2024-12-16T05:25:40.349Z] Copying: 512/512 [B] (average 500 kBps) 00:08:00.090 00:08:00.090 ************************************ 00:08:00.090 END TEST dd_flag_nofollow_forced_aio 00:08:00.090 ************************************ 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ sklxvvae59112u8wr76mwvhgogkp7jn1vpxgejuhqlnwptk63jg0f523bclatshtwdkk7urmqye52g2t2y5dkd9rwoa6z0gflkblvicdvvml5mawpvd8zeffef8gs9w3jecgx2sfbmlucivfkkv64da3e0atcvz561l21qsjqwixl8qhfo6u58g19vjtg85nb511og0d2ovlic698i7ou2prlhl6t4jzduuusx8vkzt1gvt1ny9b2k18q93yplq7k55bn9x0ovjq6ep68aohial6dzjehi789vlap0674bpzdp1jwia09clmgvpizpw9eue22uf55q8r46l1iutj9igl9cvy7053egiwlt7n9usknd7uxxvkyhhi2chee1q63e7vzs7j7w20xjp2hqylab2leva77wxk8atqks5x1befbzwq0azanguxwsnq4ilrbhbaxxicrvs74g41edjnytxbxeux8m1socxn4a9fcr7b7jrwt2dxatxuna4c0duw == \s\k\l\x\v\v\a\e\5\9\1\1\2\u\8\w\r\7\6\m\w\v\h\g\o\g\k\p\7\j\n\1\v\p\x\g\e\j\u\h\q\l\n\w\p\t\k\6\3\j\g\0\f\5\2\3\b\c\l\a\t\s\h\t\w\d\k\k\7\u\r\m\q\y\e\5\2\g\2\t\2\y\5\d\k\d\9\r\w\o\a\6\z\0\g\f\l\k\b\l\v\i\c\d\v\v\m\l\5\m\a\w\p\v\d\8\z\e\f\f\e\f\8\g\s\9\w\3\j\e\c\g\x\2\s\f\b\m\l\u\c\i\v\f\k\k\v\6\4\d\a\3\e\0\a\t\c\v\z\5\6\1\l\2\1\q\s\j\q\w\i\x\l\8\q\h\f\o\6\u\5\8\g\1\9\v\j\t\g\8\5\n\b\5\1\1\o\g\0\d\2\o\v\l\i\c\6\9\8\i\7\o\u\2\p\r\l\h\l\6\t\4\j\z\d\u\u\u\s\x\8\v\k\z\t\1\g\v\t\1\n\y\9\b\2\k\1\8\q\9\3\y\p\l\q\7\k\5\5\b\n\9\x\0\o\v\j\q\6\e\p\6\8\a\o\h\i\a\l\6\d\z\j\e\h\i\7\8\9\v\l\a\p\0\6\7\4\b\p\z\d\p\1\j\w\i\a\0\9\c\l\m\g\v\p\i\z\p\w\9\e\u\e\2\2\u\f\5\5\q\8\r\4\6\l\1\i\u\t\j\9\i\g\l\9\c\v\y\7\0\5\3\e\g\i\w\l\t\7\n\9\u\s\k\n\d\7\u\x\x\v\k\y\h\h\i\2\c\h\e\e\1\q\6\3\e\7\v\z\s\7\j\7\w\2\0\x\j\p\2\h\q\y\l\a\b\2\l\e\v\a\7\7\w\x\k\8\a\t\q\k\s\5\x\1\b\e\f\b\z\w\q\0\a\z\a\n\g\u\x\w\s\n\q\4\i\l\r\b\h\b\a\x\x\i\c\r\v\s\7\4\g\4\1\e\d\j\n\y\t\x\b\x\e\u\x\8\m\1\s\o\c\x\n\4\a\9\f\c\r\7\b\7\j\r\w\t\2\d\x\a\t\x\u\n\a\4\c\0\d\u\w ]] 00:08:00.090 00:08:00.090 real 0m4.360s 00:08:00.090 user 0m3.435s 00:08:00.090 sys 0m0.578s 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:00.090 ************************************ 00:08:00.090 START TEST dd_flag_noatime_forced_aio 00:08:00.090 ************************************ 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1734326739 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1734326740 00:08:00.090 05:25:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:01.026 05:25:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.285 [2024-12-16 05:25:41.348756] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
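A note on the noatime check in progress here: dd/posix.sh records the source file's access time with stat --printf=%X, copies it with --iflag=noatime, and asserts the recorded atime is unchanged; a later copy without the flag must leave the atime strictly newer. A minimal stand-alone sketch of the same idea (file paths, the sleep, and the bare spdk_dd name are illustrative; the log uses the built binary under /home/vagrant/spdk_repo/spdk/build/bin):

src=/tmp/dd.noatime.src
dst=/tmp/dd.noatime.dst
head -c 512 /dev/urandom > "$src"
atime_before=$(stat --printf=%X "$src")        # access time, seconds since the epoch
sleep 1                                        # make sure a bumped atime would be visible
spdk_dd --aio --if="$src" --iflag=noatime --of="$dst"
atime_after=$(stat --printf=%X "$src")
(( atime_before == atime_after )) || echo "noatime was not honoured"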
00:08:01.285 [2024-12-16 05:25:41.348930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64085 ] 00:08:01.285 [2024-12-16 05:25:41.526843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.543 [2024-12-16 05:25:41.613061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.543 [2024-12-16 05:25:41.765424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.802  [2024-12-16T05:25:42.997Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.738 00:08:02.738 05:25:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.738 05:25:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1734326739 )) 00:08:02.738 05:25:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.738 05:25:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1734326740 )) 00:08:02.738 05:25:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.738 [2024-12-16 05:25:42.865036] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:02.738 [2024-12-16 05:25:42.865206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64104 ] 00:08:02.997 [2024-12-16 05:25:43.042908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.997 [2024-12-16 05:25:43.138929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.255 [2024-12-16 05:25:43.292727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.255  [2024-12-16T05:25:44.450Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.191 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.191 ************************************ 00:08:04.191 END TEST dd_flag_noatime_forced_aio 00:08:04.191 ************************************ 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1734326743 )) 00:08:04.191 00:08:04.191 real 0m4.032s 00:08:04.191 user 0m2.397s 00:08:04.191 sys 0m0.390s 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.191 ************************************ 00:08:04.191 START TEST dd_flags_misc_forced_aio 00:08:04.191 ************************************ 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:04.191 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:04.192 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:04.192 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:04.192 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:04.192 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:04.192 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.192 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.192 05:25:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:04.192 [2024-12-16 05:25:44.443376] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:04.192 [2024-12-16 05:25:44.443749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64143 ] 00:08:04.451 [2024-12-16 05:25:44.621313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.451 [2024-12-16 05:25:44.703785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.709 [2024-12-16 05:25:44.846587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.709  [2024-12-16T05:25:45.902Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.643 00:08:05.643 05:25:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gphluc0oy3x617fmtafzf0wv1okxhhy4ge3a11y5woueuj8r6190bg2o4zerv2qlv3p2p22h0w3ngvnop9a8a10idgfvcovv0dhoqwn6wmva9jibsh35kvreh82hpybxkomp0omrky4kbhd39u1hkbjgri55cun30943pxkjwa8wbvur4j9194qkw4lepss8w56zrdkwuwuczp6t68aq3rnysue5tsy5757uem6v6r77oo064zfog3exghu6iat5cefjnd8dj03npwp9sqn77qqkjcg0huco2m66jzyr0bhmf8lvcv27abem6kvk5psqn3pafhnfz9shygcj96na7eud6fwef4cykblcpz8ka63h0f0pxs9oymb93qy1nzgmdnbzz35phzpecl1s7mky9oreegtopffgx5pdn1131syik9qdxdke5n7k53mopl9krm5lqr52izdl2gybkycfm48gmmduf3lad4xrdx3prf1cbv7w8uyfj6jwqw5i22hr == 
\g\p\h\l\u\c\0\o\y\3\x\6\1\7\f\m\t\a\f\z\f\0\w\v\1\o\k\x\h\h\y\4\g\e\3\a\1\1\y\5\w\o\u\e\u\j\8\r\6\1\9\0\b\g\2\o\4\z\e\r\v\2\q\l\v\3\p\2\p\2\2\h\0\w\3\n\g\v\n\o\p\9\a\8\a\1\0\i\d\g\f\v\c\o\v\v\0\d\h\o\q\w\n\6\w\m\v\a\9\j\i\b\s\h\3\5\k\v\r\e\h\8\2\h\p\y\b\x\k\o\m\p\0\o\m\r\k\y\4\k\b\h\d\3\9\u\1\h\k\b\j\g\r\i\5\5\c\u\n\3\0\9\4\3\p\x\k\j\w\a\8\w\b\v\u\r\4\j\9\1\9\4\q\k\w\4\l\e\p\s\s\8\w\5\6\z\r\d\k\w\u\w\u\c\z\p\6\t\6\8\a\q\3\r\n\y\s\u\e\5\t\s\y\5\7\5\7\u\e\m\6\v\6\r\7\7\o\o\0\6\4\z\f\o\g\3\e\x\g\h\u\6\i\a\t\5\c\e\f\j\n\d\8\d\j\0\3\n\p\w\p\9\s\q\n\7\7\q\q\k\j\c\g\0\h\u\c\o\2\m\6\6\j\z\y\r\0\b\h\m\f\8\l\v\c\v\2\7\a\b\e\m\6\k\v\k\5\p\s\q\n\3\p\a\f\h\n\f\z\9\s\h\y\g\c\j\9\6\n\a\7\e\u\d\6\f\w\e\f\4\c\y\k\b\l\c\p\z\8\k\a\6\3\h\0\f\0\p\x\s\9\o\y\m\b\9\3\q\y\1\n\z\g\m\d\n\b\z\z\3\5\p\h\z\p\e\c\l\1\s\7\m\k\y\9\o\r\e\e\g\t\o\p\f\f\g\x\5\p\d\n\1\1\3\1\s\y\i\k\9\q\d\x\d\k\e\5\n\7\k\5\3\m\o\p\l\9\k\r\m\5\l\q\r\5\2\i\z\d\l\2\g\y\b\k\y\c\f\m\4\8\g\m\m\d\u\f\3\l\a\d\4\x\r\d\x\3\p\r\f\1\c\b\v\7\w\8\u\y\f\j\6\j\w\q\w\5\i\2\2\h\r ]] 00:08:05.643 05:25:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:05.643 05:25:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:05.902 [2024-12-16 05:25:45.906999] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:05.902 [2024-12-16 05:25:45.907192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64162 ] 00:08:05.902 [2024-12-16 05:25:46.085723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.161 [2024-12-16 05:25:46.173810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.161 [2024-12-16 05:25:46.321901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.420  [2024-12-16T05:25:47.247Z] Copying: 512/512 [B] (average 500 kBps) 00:08:06.988 00:08:06.988 05:25:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gphluc0oy3x617fmtafzf0wv1okxhhy4ge3a11y5woueuj8r6190bg2o4zerv2qlv3p2p22h0w3ngvnop9a8a10idgfvcovv0dhoqwn6wmva9jibsh35kvreh82hpybxkomp0omrky4kbhd39u1hkbjgri55cun30943pxkjwa8wbvur4j9194qkw4lepss8w56zrdkwuwuczp6t68aq3rnysue5tsy5757uem6v6r77oo064zfog3exghu6iat5cefjnd8dj03npwp9sqn77qqkjcg0huco2m66jzyr0bhmf8lvcv27abem6kvk5psqn3pafhnfz9shygcj96na7eud6fwef4cykblcpz8ka63h0f0pxs9oymb93qy1nzgmdnbzz35phzpecl1s7mky9oreegtopffgx5pdn1131syik9qdxdke5n7k53mopl9krm5lqr52izdl2gybkycfm48gmmduf3lad4xrdx3prf1cbv7w8uyfj6jwqw5i22hr == 
\g\p\h\l\u\c\0\o\y\3\x\6\1\7\f\m\t\a\f\z\f\0\w\v\1\o\k\x\h\h\y\4\g\e\3\a\1\1\y\5\w\o\u\e\u\j\8\r\6\1\9\0\b\g\2\o\4\z\e\r\v\2\q\l\v\3\p\2\p\2\2\h\0\w\3\n\g\v\n\o\p\9\a\8\a\1\0\i\d\g\f\v\c\o\v\v\0\d\h\o\q\w\n\6\w\m\v\a\9\j\i\b\s\h\3\5\k\v\r\e\h\8\2\h\p\y\b\x\k\o\m\p\0\o\m\r\k\y\4\k\b\h\d\3\9\u\1\h\k\b\j\g\r\i\5\5\c\u\n\3\0\9\4\3\p\x\k\j\w\a\8\w\b\v\u\r\4\j\9\1\9\4\q\k\w\4\l\e\p\s\s\8\w\5\6\z\r\d\k\w\u\w\u\c\z\p\6\t\6\8\a\q\3\r\n\y\s\u\e\5\t\s\y\5\7\5\7\u\e\m\6\v\6\r\7\7\o\o\0\6\4\z\f\o\g\3\e\x\g\h\u\6\i\a\t\5\c\e\f\j\n\d\8\d\j\0\3\n\p\w\p\9\s\q\n\7\7\q\q\k\j\c\g\0\h\u\c\o\2\m\6\6\j\z\y\r\0\b\h\m\f\8\l\v\c\v\2\7\a\b\e\m\6\k\v\k\5\p\s\q\n\3\p\a\f\h\n\f\z\9\s\h\y\g\c\j\9\6\n\a\7\e\u\d\6\f\w\e\f\4\c\y\k\b\l\c\p\z\8\k\a\6\3\h\0\f\0\p\x\s\9\o\y\m\b\9\3\q\y\1\n\z\g\m\d\n\b\z\z\3\5\p\h\z\p\e\c\l\1\s\7\m\k\y\9\o\r\e\e\g\t\o\p\f\f\g\x\5\p\d\n\1\1\3\1\s\y\i\k\9\q\d\x\d\k\e\5\n\7\k\5\3\m\o\p\l\9\k\r\m\5\l\q\r\5\2\i\z\d\l\2\g\y\b\k\y\c\f\m\4\8\g\m\m\d\u\f\3\l\a\d\4\x\r\d\x\3\p\r\f\1\c\b\v\7\w\8\u\y\f\j\6\j\w\q\w\5\i\2\2\h\r ]] 00:08:06.988 05:25:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:06.988 05:25:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:07.247 [2024-12-16 05:25:47.346056] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:07.247 [2024-12-16 05:25:47.346223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64182 ] 00:08:07.506 [2024-12-16 05:25:47.527414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.506 [2024-12-16 05:25:47.615217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.506 [2024-12-16 05:25:47.762345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.765  [2024-12-16T05:25:48.959Z] Copying: 512/512 [B] (average 250 kBps) 00:08:08.700 00:08:08.700 05:25:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gphluc0oy3x617fmtafzf0wv1okxhhy4ge3a11y5woueuj8r6190bg2o4zerv2qlv3p2p22h0w3ngvnop9a8a10idgfvcovv0dhoqwn6wmva9jibsh35kvreh82hpybxkomp0omrky4kbhd39u1hkbjgri55cun30943pxkjwa8wbvur4j9194qkw4lepss8w56zrdkwuwuczp6t68aq3rnysue5tsy5757uem6v6r77oo064zfog3exghu6iat5cefjnd8dj03npwp9sqn77qqkjcg0huco2m66jzyr0bhmf8lvcv27abem6kvk5psqn3pafhnfz9shygcj96na7eud6fwef4cykblcpz8ka63h0f0pxs9oymb93qy1nzgmdnbzz35phzpecl1s7mky9oreegtopffgx5pdn1131syik9qdxdke5n7k53mopl9krm5lqr52izdl2gybkycfm48gmmduf3lad4xrdx3prf1cbv7w8uyfj6jwqw5i22hr == 
\g\p\h\l\u\c\0\o\y\3\x\6\1\7\f\m\t\a\f\z\f\0\w\v\1\o\k\x\h\h\y\4\g\e\3\a\1\1\y\5\w\o\u\e\u\j\8\r\6\1\9\0\b\g\2\o\4\z\e\r\v\2\q\l\v\3\p\2\p\2\2\h\0\w\3\n\g\v\n\o\p\9\a\8\a\1\0\i\d\g\f\v\c\o\v\v\0\d\h\o\q\w\n\6\w\m\v\a\9\j\i\b\s\h\3\5\k\v\r\e\h\8\2\h\p\y\b\x\k\o\m\p\0\o\m\r\k\y\4\k\b\h\d\3\9\u\1\h\k\b\j\g\r\i\5\5\c\u\n\3\0\9\4\3\p\x\k\j\w\a\8\w\b\v\u\r\4\j\9\1\9\4\q\k\w\4\l\e\p\s\s\8\w\5\6\z\r\d\k\w\u\w\u\c\z\p\6\t\6\8\a\q\3\r\n\y\s\u\e\5\t\s\y\5\7\5\7\u\e\m\6\v\6\r\7\7\o\o\0\6\4\z\f\o\g\3\e\x\g\h\u\6\i\a\t\5\c\e\f\j\n\d\8\d\j\0\3\n\p\w\p\9\s\q\n\7\7\q\q\k\j\c\g\0\h\u\c\o\2\m\6\6\j\z\y\r\0\b\h\m\f\8\l\v\c\v\2\7\a\b\e\m\6\k\v\k\5\p\s\q\n\3\p\a\f\h\n\f\z\9\s\h\y\g\c\j\9\6\n\a\7\e\u\d\6\f\w\e\f\4\c\y\k\b\l\c\p\z\8\k\a\6\3\h\0\f\0\p\x\s\9\o\y\m\b\9\3\q\y\1\n\z\g\m\d\n\b\z\z\3\5\p\h\z\p\e\c\l\1\s\7\m\k\y\9\o\r\e\e\g\t\o\p\f\f\g\x\5\p\d\n\1\1\3\1\s\y\i\k\9\q\d\x\d\k\e\5\n\7\k\5\3\m\o\p\l\9\k\r\m\5\l\q\r\5\2\i\z\d\l\2\g\y\b\k\y\c\f\m\4\8\g\m\m\d\u\f\3\l\a\d\4\x\r\d\x\3\p\r\f\1\c\b\v\7\w\8\u\y\f\j\6\j\w\q\w\5\i\2\2\h\r ]] 00:08:08.700 05:25:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:08.700 05:25:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:08.700 [2024-12-16 05:25:48.850394] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:08.700 [2024-12-16 05:25:48.850583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64201 ] 00:08:08.959 [2024-12-16 05:25:49.031824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.959 [2024-12-16 05:25:49.123629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.219 [2024-12-16 05:25:49.275788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.219  [2024-12-16T05:25:50.428Z] Copying: 512/512 [B] (average 250 kBps) 00:08:10.169 00:08:10.169 05:25:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gphluc0oy3x617fmtafzf0wv1okxhhy4ge3a11y5woueuj8r6190bg2o4zerv2qlv3p2p22h0w3ngvnop9a8a10idgfvcovv0dhoqwn6wmva9jibsh35kvreh82hpybxkomp0omrky4kbhd39u1hkbjgri55cun30943pxkjwa8wbvur4j9194qkw4lepss8w56zrdkwuwuczp6t68aq3rnysue5tsy5757uem6v6r77oo064zfog3exghu6iat5cefjnd8dj03npwp9sqn77qqkjcg0huco2m66jzyr0bhmf8lvcv27abem6kvk5psqn3pafhnfz9shygcj96na7eud6fwef4cykblcpz8ka63h0f0pxs9oymb93qy1nzgmdnbzz35phzpecl1s7mky9oreegtopffgx5pdn1131syik9qdxdke5n7k53mopl9krm5lqr52izdl2gybkycfm48gmmduf3lad4xrdx3prf1cbv7w8uyfj6jwqw5i22hr == 
\g\p\h\l\u\c\0\o\y\3\x\6\1\7\f\m\t\a\f\z\f\0\w\v\1\o\k\x\h\h\y\4\g\e\3\a\1\1\y\5\w\o\u\e\u\j\8\r\6\1\9\0\b\g\2\o\4\z\e\r\v\2\q\l\v\3\p\2\p\2\2\h\0\w\3\n\g\v\n\o\p\9\a\8\a\1\0\i\d\g\f\v\c\o\v\v\0\d\h\o\q\w\n\6\w\m\v\a\9\j\i\b\s\h\3\5\k\v\r\e\h\8\2\h\p\y\b\x\k\o\m\p\0\o\m\r\k\y\4\k\b\h\d\3\9\u\1\h\k\b\j\g\r\i\5\5\c\u\n\3\0\9\4\3\p\x\k\j\w\a\8\w\b\v\u\r\4\j\9\1\9\4\q\k\w\4\l\e\p\s\s\8\w\5\6\z\r\d\k\w\u\w\u\c\z\p\6\t\6\8\a\q\3\r\n\y\s\u\e\5\t\s\y\5\7\5\7\u\e\m\6\v\6\r\7\7\o\o\0\6\4\z\f\o\g\3\e\x\g\h\u\6\i\a\t\5\c\e\f\j\n\d\8\d\j\0\3\n\p\w\p\9\s\q\n\7\7\q\q\k\j\c\g\0\h\u\c\o\2\m\6\6\j\z\y\r\0\b\h\m\f\8\l\v\c\v\2\7\a\b\e\m\6\k\v\k\5\p\s\q\n\3\p\a\f\h\n\f\z\9\s\h\y\g\c\j\9\6\n\a\7\e\u\d\6\f\w\e\f\4\c\y\k\b\l\c\p\z\8\k\a\6\3\h\0\f\0\p\x\s\9\o\y\m\b\9\3\q\y\1\n\z\g\m\d\n\b\z\z\3\5\p\h\z\p\e\c\l\1\s\7\m\k\y\9\o\r\e\e\g\t\o\p\f\f\g\x\5\p\d\n\1\1\3\1\s\y\i\k\9\q\d\x\d\k\e\5\n\7\k\5\3\m\o\p\l\9\k\r\m\5\l\q\r\5\2\i\z\d\l\2\g\y\b\k\y\c\f\m\4\8\g\m\m\d\u\f\3\l\a\d\4\x\r\d\x\3\p\r\f\1\c\b\v\7\w\8\u\y\f\j\6\j\w\q\w\5\i\2\2\h\r ]] 00:08:10.169 05:25:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:10.169 05:25:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:10.169 05:25:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:10.169 05:25:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:10.169 05:25:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.170 05:25:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:10.170 [2024-12-16 05:25:50.324271] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
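The dd_flags_misc_forced_aio runs above and below come from a small flag matrix: every input flag in (direct, nonblock) is paired with every output flag in (direct, nonblock, sync, dsync), and each copy must reproduce the 512 random bytes exactly. Roughly, ignoring the harness plumbing, the loop has this shape (gen_bytes is the suite's own helper; file paths and the spdk_dd name are shortened here):

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  gen_bytes 512                                    # suite helper: 512 random bytes into dd.dump0
  for flag_rw in "${flags_rw[@]}"; do
    spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
            --of=dd.dump1 --oflag="$flag_rw"
    [[ $(< dd.dump1) == "$(< dd.dump0)" ]]         # copy must be byte-identical every time
  done
done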
00:08:10.170 [2024-12-16 05:25:50.324656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64215 ] 00:08:10.429 [2024-12-16 05:25:50.501735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.429 [2024-12-16 05:25:50.598886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.688 [2024-12-16 05:25:50.751433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.688  [2024-12-16T05:25:51.885Z] Copying: 512/512 [B] (average 500 kBps) 00:08:11.626 00:08:11.626 05:25:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hp8dkortflslruspyonnttgakzd35jcej9t2sr3wiwweduexy8to2l5d3r1xm5o1rdb4hsbivjlbspukrougot1z725m16iv4bcuf18emq3irbyyyzsu61w35kvpbgwa2m0qhcbclf8k515tvrkvw4s64iwxrt2dejh8vrtccu4qmazrd126bjxeorrr51n3vy31nta4ywm1kcdrzdw49khdqzug2zwua5m6b355yi5jhpo33woy8uuu4ng4jq0u17c8itt0iacx37efsnszbmrh9373modnfh1su68v6afigreheryyym7krj55qnx3c92oklkiu310tcuil8ek654p7rlxenpbotpih98j73g5klgo3ws7jknzw2iaqaf927ark3su1b0kg8l5v8yl1xbjxs7d6yjs0equu7jq59flnfrwuhdsav6ldh2rei42abhrvb2p5jxq6vrp1hoqtzqjv74slrnb8s3k597wcnwqb7zj0g7j5mwuquzbup4i == \h\p\8\d\k\o\r\t\f\l\s\l\r\u\s\p\y\o\n\n\t\t\g\a\k\z\d\3\5\j\c\e\j\9\t\2\s\r\3\w\i\w\w\e\d\u\e\x\y\8\t\o\2\l\5\d\3\r\1\x\m\5\o\1\r\d\b\4\h\s\b\i\v\j\l\b\s\p\u\k\r\o\u\g\o\t\1\z\7\2\5\m\1\6\i\v\4\b\c\u\f\1\8\e\m\q\3\i\r\b\y\y\y\z\s\u\6\1\w\3\5\k\v\p\b\g\w\a\2\m\0\q\h\c\b\c\l\f\8\k\5\1\5\t\v\r\k\v\w\4\s\6\4\i\w\x\r\t\2\d\e\j\h\8\v\r\t\c\c\u\4\q\m\a\z\r\d\1\2\6\b\j\x\e\o\r\r\r\5\1\n\3\v\y\3\1\n\t\a\4\y\w\m\1\k\c\d\r\z\d\w\4\9\k\h\d\q\z\u\g\2\z\w\u\a\5\m\6\b\3\5\5\y\i\5\j\h\p\o\3\3\w\o\y\8\u\u\u\4\n\g\4\j\q\0\u\1\7\c\8\i\t\t\0\i\a\c\x\3\7\e\f\s\n\s\z\b\m\r\h\9\3\7\3\m\o\d\n\f\h\1\s\u\6\8\v\6\a\f\i\g\r\e\h\e\r\y\y\y\m\7\k\r\j\5\5\q\n\x\3\c\9\2\o\k\l\k\i\u\3\1\0\t\c\u\i\l\8\e\k\6\5\4\p\7\r\l\x\e\n\p\b\o\t\p\i\h\9\8\j\7\3\g\5\k\l\g\o\3\w\s\7\j\k\n\z\w\2\i\a\q\a\f\9\2\7\a\r\k\3\s\u\1\b\0\k\g\8\l\5\v\8\y\l\1\x\b\j\x\s\7\d\6\y\j\s\0\e\q\u\u\7\j\q\5\9\f\l\n\f\r\w\u\h\d\s\a\v\6\l\d\h\2\r\e\i\4\2\a\b\h\r\v\b\2\p\5\j\x\q\6\v\r\p\1\h\o\q\t\z\q\j\v\7\4\s\l\r\n\b\8\s\3\k\5\9\7\w\c\n\w\q\b\7\z\j\0\g\7\j\5\m\w\u\q\u\z\b\u\p\4\i ]] 00:08:11.626 05:25:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.626 05:25:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:11.626 [2024-12-16 05:25:51.770021] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:11.626 [2024-12-16 05:25:51.770199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64235 ] 00:08:11.885 [2024-12-16 05:25:51.943724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.885 [2024-12-16 05:25:52.026345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.145 [2024-12-16 05:25:52.189603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.145  [2024-12-16T05:25:53.339Z] Copying: 512/512 [B] (average 500 kBps) 00:08:13.080 00:08:13.080 05:25:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hp8dkortflslruspyonnttgakzd35jcej9t2sr3wiwweduexy8to2l5d3r1xm5o1rdb4hsbivjlbspukrougot1z725m16iv4bcuf18emq3irbyyyzsu61w35kvpbgwa2m0qhcbclf8k515tvrkvw4s64iwxrt2dejh8vrtccu4qmazrd126bjxeorrr51n3vy31nta4ywm1kcdrzdw49khdqzug2zwua5m6b355yi5jhpo33woy8uuu4ng4jq0u17c8itt0iacx37efsnszbmrh9373modnfh1su68v6afigreheryyym7krj55qnx3c92oklkiu310tcuil8ek654p7rlxenpbotpih98j73g5klgo3ws7jknzw2iaqaf927ark3su1b0kg8l5v8yl1xbjxs7d6yjs0equu7jq59flnfrwuhdsav6ldh2rei42abhrvb2p5jxq6vrp1hoqtzqjv74slrnb8s3k597wcnwqb7zj0g7j5mwuquzbup4i == \h\p\8\d\k\o\r\t\f\l\s\l\r\u\s\p\y\o\n\n\t\t\g\a\k\z\d\3\5\j\c\e\j\9\t\2\s\r\3\w\i\w\w\e\d\u\e\x\y\8\t\o\2\l\5\d\3\r\1\x\m\5\o\1\r\d\b\4\h\s\b\i\v\j\l\b\s\p\u\k\r\o\u\g\o\t\1\z\7\2\5\m\1\6\i\v\4\b\c\u\f\1\8\e\m\q\3\i\r\b\y\y\y\z\s\u\6\1\w\3\5\k\v\p\b\g\w\a\2\m\0\q\h\c\b\c\l\f\8\k\5\1\5\t\v\r\k\v\w\4\s\6\4\i\w\x\r\t\2\d\e\j\h\8\v\r\t\c\c\u\4\q\m\a\z\r\d\1\2\6\b\j\x\e\o\r\r\r\5\1\n\3\v\y\3\1\n\t\a\4\y\w\m\1\k\c\d\r\z\d\w\4\9\k\h\d\q\z\u\g\2\z\w\u\a\5\m\6\b\3\5\5\y\i\5\j\h\p\o\3\3\w\o\y\8\u\u\u\4\n\g\4\j\q\0\u\1\7\c\8\i\t\t\0\i\a\c\x\3\7\e\f\s\n\s\z\b\m\r\h\9\3\7\3\m\o\d\n\f\h\1\s\u\6\8\v\6\a\f\i\g\r\e\h\e\r\y\y\y\m\7\k\r\j\5\5\q\n\x\3\c\9\2\o\k\l\k\i\u\3\1\0\t\c\u\i\l\8\e\k\6\5\4\p\7\r\l\x\e\n\p\b\o\t\p\i\h\9\8\j\7\3\g\5\k\l\g\o\3\w\s\7\j\k\n\z\w\2\i\a\q\a\f\9\2\7\a\r\k\3\s\u\1\b\0\k\g\8\l\5\v\8\y\l\1\x\b\j\x\s\7\d\6\y\j\s\0\e\q\u\u\7\j\q\5\9\f\l\n\f\r\w\u\h\d\s\a\v\6\l\d\h\2\r\e\i\4\2\a\b\h\r\v\b\2\p\5\j\x\q\6\v\r\p\1\h\o\q\t\z\q\j\v\7\4\s\l\r\n\b\8\s\3\k\5\9\7\w\c\n\w\q\b\7\z\j\0\g\7\j\5\m\w\u\q\u\z\b\u\p\4\i ]] 00:08:13.081 05:25:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:13.081 05:25:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:13.081 [2024-12-16 05:25:53.298643] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
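About the long runs of backslashes in the comparisons above and below: the test quotes the right-hand side of == inside [[ ]] so the check is a literal string comparison rather than glob matching, and bash's xtrace renders such a literal operand by escaping every character with a backslash. The \g\p\h\l... and \h\p\8\d... walls are therefore normal trace output, not corruption. A short illustration (file names illustrative):

set -x
[[ $(< dd.dump1) == "$(< dd.dump0)" ]]
# xtrace then prints something like:  [[ gphluc0oy3... == \g\p\h\l\u\c\0\o\y\3... ]]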
00:08:13.081 [2024-12-16 05:25:53.298816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64254 ] 00:08:13.339 [2024-12-16 05:25:53.481540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.339 [2024-12-16 05:25:53.564743] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.598 [2024-12-16 05:25:53.723881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.598  [2024-12-16T05:25:54.795Z] Copying: 512/512 [B] (average 250 kBps) 00:08:14.536 00:08:14.536 05:25:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hp8dkortflslruspyonnttgakzd35jcej9t2sr3wiwweduexy8to2l5d3r1xm5o1rdb4hsbivjlbspukrougot1z725m16iv4bcuf18emq3irbyyyzsu61w35kvpbgwa2m0qhcbclf8k515tvrkvw4s64iwxrt2dejh8vrtccu4qmazrd126bjxeorrr51n3vy31nta4ywm1kcdrzdw49khdqzug2zwua5m6b355yi5jhpo33woy8uuu4ng4jq0u17c8itt0iacx37efsnszbmrh9373modnfh1su68v6afigreheryyym7krj55qnx3c92oklkiu310tcuil8ek654p7rlxenpbotpih98j73g5klgo3ws7jknzw2iaqaf927ark3su1b0kg8l5v8yl1xbjxs7d6yjs0equu7jq59flnfrwuhdsav6ldh2rei42abhrvb2p5jxq6vrp1hoqtzqjv74slrnb8s3k597wcnwqb7zj0g7j5mwuquzbup4i == \h\p\8\d\k\o\r\t\f\l\s\l\r\u\s\p\y\o\n\n\t\t\g\a\k\z\d\3\5\j\c\e\j\9\t\2\s\r\3\w\i\w\w\e\d\u\e\x\y\8\t\o\2\l\5\d\3\r\1\x\m\5\o\1\r\d\b\4\h\s\b\i\v\j\l\b\s\p\u\k\r\o\u\g\o\t\1\z\7\2\5\m\1\6\i\v\4\b\c\u\f\1\8\e\m\q\3\i\r\b\y\y\y\z\s\u\6\1\w\3\5\k\v\p\b\g\w\a\2\m\0\q\h\c\b\c\l\f\8\k\5\1\5\t\v\r\k\v\w\4\s\6\4\i\w\x\r\t\2\d\e\j\h\8\v\r\t\c\c\u\4\q\m\a\z\r\d\1\2\6\b\j\x\e\o\r\r\r\5\1\n\3\v\y\3\1\n\t\a\4\y\w\m\1\k\c\d\r\z\d\w\4\9\k\h\d\q\z\u\g\2\z\w\u\a\5\m\6\b\3\5\5\y\i\5\j\h\p\o\3\3\w\o\y\8\u\u\u\4\n\g\4\j\q\0\u\1\7\c\8\i\t\t\0\i\a\c\x\3\7\e\f\s\n\s\z\b\m\r\h\9\3\7\3\m\o\d\n\f\h\1\s\u\6\8\v\6\a\f\i\g\r\e\h\e\r\y\y\y\m\7\k\r\j\5\5\q\n\x\3\c\9\2\o\k\l\k\i\u\3\1\0\t\c\u\i\l\8\e\k\6\5\4\p\7\r\l\x\e\n\p\b\o\t\p\i\h\9\8\j\7\3\g\5\k\l\g\o\3\w\s\7\j\k\n\z\w\2\i\a\q\a\f\9\2\7\a\r\k\3\s\u\1\b\0\k\g\8\l\5\v\8\y\l\1\x\b\j\x\s\7\d\6\y\j\s\0\e\q\u\u\7\j\q\5\9\f\l\n\f\r\w\u\h\d\s\a\v\6\l\d\h\2\r\e\i\4\2\a\b\h\r\v\b\2\p\5\j\x\q\6\v\r\p\1\h\o\q\t\z\q\j\v\7\4\s\l\r\n\b\8\s\3\k\5\9\7\w\c\n\w\q\b\7\z\j\0\g\7\j\5\m\w\u\q\u\z\b\u\p\4\i ]] 00:08:14.536 05:25:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:14.536 05:25:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:14.536 [2024-12-16 05:25:54.757267] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:14.536 [2024-12-16 05:25:54.757428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64274 ] 00:08:14.795 [2024-12-16 05:25:54.937000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.795 [2024-12-16 05:25:55.018769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.056 [2024-12-16 05:25:55.170218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.056  [2024-12-16T05:25:56.251Z] Copying: 512/512 [B] (average 500 kBps) 00:08:15.992 00:08:15.992 05:25:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hp8dkortflslruspyonnttgakzd35jcej9t2sr3wiwweduexy8to2l5d3r1xm5o1rdb4hsbivjlbspukrougot1z725m16iv4bcuf18emq3irbyyyzsu61w35kvpbgwa2m0qhcbclf8k515tvrkvw4s64iwxrt2dejh8vrtccu4qmazrd126bjxeorrr51n3vy31nta4ywm1kcdrzdw49khdqzug2zwua5m6b355yi5jhpo33woy8uuu4ng4jq0u17c8itt0iacx37efsnszbmrh9373modnfh1su68v6afigreheryyym7krj55qnx3c92oklkiu310tcuil8ek654p7rlxenpbotpih98j73g5klgo3ws7jknzw2iaqaf927ark3su1b0kg8l5v8yl1xbjxs7d6yjs0equu7jq59flnfrwuhdsav6ldh2rei42abhrvb2p5jxq6vrp1hoqtzqjv74slrnb8s3k597wcnwqb7zj0g7j5mwuquzbup4i == \h\p\8\d\k\o\r\t\f\l\s\l\r\u\s\p\y\o\n\n\t\t\g\a\k\z\d\3\5\j\c\e\j\9\t\2\s\r\3\w\i\w\w\e\d\u\e\x\y\8\t\o\2\l\5\d\3\r\1\x\m\5\o\1\r\d\b\4\h\s\b\i\v\j\l\b\s\p\u\k\r\o\u\g\o\t\1\z\7\2\5\m\1\6\i\v\4\b\c\u\f\1\8\e\m\q\3\i\r\b\y\y\y\z\s\u\6\1\w\3\5\k\v\p\b\g\w\a\2\m\0\q\h\c\b\c\l\f\8\k\5\1\5\t\v\r\k\v\w\4\s\6\4\i\w\x\r\t\2\d\e\j\h\8\v\r\t\c\c\u\4\q\m\a\z\r\d\1\2\6\b\j\x\e\o\r\r\r\5\1\n\3\v\y\3\1\n\t\a\4\y\w\m\1\k\c\d\r\z\d\w\4\9\k\h\d\q\z\u\g\2\z\w\u\a\5\m\6\b\3\5\5\y\i\5\j\h\p\o\3\3\w\o\y\8\u\u\u\4\n\g\4\j\q\0\u\1\7\c\8\i\t\t\0\i\a\c\x\3\7\e\f\s\n\s\z\b\m\r\h\9\3\7\3\m\o\d\n\f\h\1\s\u\6\8\v\6\a\f\i\g\r\e\h\e\r\y\y\y\m\7\k\r\j\5\5\q\n\x\3\c\9\2\o\k\l\k\i\u\3\1\0\t\c\u\i\l\8\e\k\6\5\4\p\7\r\l\x\e\n\p\b\o\t\p\i\h\9\8\j\7\3\g\5\k\l\g\o\3\w\s\7\j\k\n\z\w\2\i\a\q\a\f\9\2\7\a\r\k\3\s\u\1\b\0\k\g\8\l\5\v\8\y\l\1\x\b\j\x\s\7\d\6\y\j\s\0\e\q\u\u\7\j\q\5\9\f\l\n\f\r\w\u\h\d\s\a\v\6\l\d\h\2\r\e\i\4\2\a\b\h\r\v\b\2\p\5\j\x\q\6\v\r\p\1\h\o\q\t\z\q\j\v\7\4\s\l\r\n\b\8\s\3\k\5\9\7\w\c\n\w\q\b\7\z\j\0\g\7\j\5\m\w\u\q\u\z\b\u\p\4\i ]] 00:08:15.992 00:08:15.992 real 0m11.823s 00:08:15.992 user 0m9.337s 00:08:15.992 sys 0m1.493s 00:08:15.992 ************************************ 00:08:15.992 END TEST dd_flags_misc_forced_aio 00:08:15.992 ************************************ 00:08:15.992 05:25:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.992 05:25:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:15.992 05:25:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:15.992 05:25:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:15.992 05:25:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:15.992 ************************************ 00:08:15.992 END TEST spdk_dd_posix 00:08:15.992 ************************************ 00:08:15.992 00:08:15.992 real 0m50.322s 00:08:15.992 user 0m38.090s 00:08:15.992 sys 0m14.230s 00:08:15.992 05:25:56 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.992 05:25:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:15.992 05:25:56 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:15.992 05:25:56 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.992 05:25:56 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.992 05:25:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:15.992 ************************************ 00:08:15.992 START TEST spdk_dd_malloc 00:08:15.992 ************************************ 00:08:15.992 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:16.252 * Looking for test storage... 00:08:16.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:16.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.252 --rc genhtml_branch_coverage=1 00:08:16.252 --rc genhtml_function_coverage=1 00:08:16.252 --rc genhtml_legend=1 00:08:16.252 --rc geninfo_all_blocks=1 00:08:16.252 --rc geninfo_unexecuted_blocks=1 00:08:16.252 00:08:16.252 ' 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:16.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.252 --rc genhtml_branch_coverage=1 00:08:16.252 --rc genhtml_function_coverage=1 00:08:16.252 --rc genhtml_legend=1 00:08:16.252 --rc geninfo_all_blocks=1 00:08:16.252 --rc geninfo_unexecuted_blocks=1 00:08:16.252 00:08:16.252 ' 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:16.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.252 --rc genhtml_branch_coverage=1 00:08:16.252 --rc genhtml_function_coverage=1 00:08:16.252 --rc genhtml_legend=1 00:08:16.252 --rc geninfo_all_blocks=1 00:08:16.252 --rc geninfo_unexecuted_blocks=1 00:08:16.252 00:08:16.252 ' 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:16.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.252 --rc genhtml_branch_coverage=1 00:08:16.252 --rc genhtml_function_coverage=1 00:08:16.252 --rc genhtml_legend=1 00:08:16.252 --rc geninfo_all_blocks=1 00:08:16.252 --rc geninfo_unexecuted_blocks=1 00:08:16.252 00:08:16.252 ' 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.252 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.253 05:25:56 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:16.253 ************************************ 00:08:16.253 START TEST dd_malloc_copy 00:08:16.253 ************************************ 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
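The dd_malloc_copy test being assembled here drives spdk_dd entirely from a JSON bdev configuration handed over on a file descriptor: two malloc bdevs of 1048576 blocks of 512 bytes (512 MiB each) are created, the data is copied malloc0 to malloc1, and then back the other way. A reduced sketch of that invocation pattern (a temporary file stands in for the harness's gen_conf on /dev/fd/62; the JSON mirrors the config echoed further down in this log):

cfg=/tmp/malloc_copy.json
cat > "$cfg" <<'JSON'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}
JSON
spdk_dd --ib=malloc0 --ob=malloc1 --json "$cfg"
spdk_dd --ib=malloc1 --ob=malloc0 --json "$cfg"   # second half of the test: copy back the other way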
00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:16.253 05:25:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.253 { 00:08:16.253 "subsystems": [ 00:08:16.253 { 00:08:16.253 "subsystem": "bdev", 00:08:16.253 "config": [ 00:08:16.253 { 00:08:16.253 "params": { 00:08:16.253 "block_size": 512, 00:08:16.253 "num_blocks": 1048576, 00:08:16.253 "name": "malloc0" 00:08:16.253 }, 00:08:16.253 "method": "bdev_malloc_create" 00:08:16.253 }, 00:08:16.253 { 00:08:16.253 "params": { 00:08:16.253 "block_size": 512, 00:08:16.253 "num_blocks": 1048576, 00:08:16.253 "name": "malloc1" 00:08:16.253 }, 00:08:16.253 "method": "bdev_malloc_create" 00:08:16.253 }, 00:08:16.253 { 00:08:16.253 "method": "bdev_wait_for_examine" 00:08:16.253 } 00:08:16.253 ] 00:08:16.253 } 00:08:16.253 ] 00:08:16.253 } 00:08:16.512 [2024-12-16 05:25:56.541137] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:16.512 [2024-12-16 05:25:56.541300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64368 ] 00:08:16.512 [2024-12-16 05:25:56.718585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.771 [2024-12-16 05:25:56.805529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.771 [2024-12-16 05:25:56.954073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.675  [2024-12-16T05:26:00.310Z] Copying: 180/512 [MB] (180 MBps) [2024-12-16T05:26:00.884Z] Copying: 354/512 [MB] (174 MBps) [2024-12-16T05:26:04.171Z] Copying: 512/512 [MB] (average 176 MBps) 00:08:23.912 00:08:23.912 05:26:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:23.912 05:26:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:23.912 05:26:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:23.912 05:26:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:23.912 { 00:08:23.912 "subsystems": [ 00:08:23.912 { 00:08:23.912 "subsystem": "bdev", 00:08:23.912 "config": [ 00:08:23.912 { 00:08:23.912 "params": { 00:08:23.912 "block_size": 512, 00:08:23.912 "num_blocks": 1048576, 00:08:23.912 "name": "malloc0" 00:08:23.912 }, 00:08:23.912 "method": "bdev_malloc_create" 00:08:23.912 }, 00:08:23.912 { 00:08:23.912 "params": { 00:08:23.912 "block_size": 512, 00:08:23.912 "num_blocks": 1048576, 00:08:23.912 "name": "malloc1" 00:08:23.912 }, 00:08:23.912 "method": 
"bdev_malloc_create" 00:08:23.912 }, 00:08:23.912 { 00:08:23.912 "method": "bdev_wait_for_examine" 00:08:23.912 } 00:08:23.912 ] 00:08:23.912 } 00:08:23.912 ] 00:08:23.912 } 00:08:23.912 [2024-12-16 05:26:03.754300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:23.912 [2024-12-16 05:26:03.754669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64455 ] 00:08:23.912 [2024-12-16 05:26:03.933830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.912 [2024-12-16 05:26:04.021473] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.171 [2024-12-16 05:26:04.181348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.072  [2024-12-16T05:26:07.267Z] Copying: 176/512 [MB] (176 MBps) [2024-12-16T05:26:08.202Z] Copying: 358/512 [MB] (181 MBps) [2024-12-16T05:26:11.489Z] Copying: 512/512 [MB] (average 181 MBps) 00:08:31.230 00:08:31.230 ************************************ 00:08:31.230 END TEST dd_malloc_copy 00:08:31.230 ************************************ 00:08:31.230 00:08:31.230 real 0m14.447s 00:08:31.230 user 0m13.465s 00:08:31.230 sys 0m0.789s 00:08:31.230 05:26:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.230 05:26:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:31.230 ************************************ 00:08:31.231 END TEST spdk_dd_malloc 00:08:31.231 ************************************ 00:08:31.231 00:08:31.231 real 0m14.691s 00:08:31.231 user 0m13.613s 00:08:31.231 sys 0m0.889s 00:08:31.231 05:26:10 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.231 05:26:10 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:31.231 05:26:10 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:31.231 05:26:10 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:31.231 05:26:10 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.231 05:26:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:31.231 ************************************ 00:08:31.231 START TEST spdk_dd_bdev_to_bdev 00:08:31.231 ************************************ 00:08:31.231 05:26:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:31.231 * Looking for test storage... 
00:08:31.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:31.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.231 --rc genhtml_branch_coverage=1 00:08:31.231 --rc genhtml_function_coverage=1 00:08:31.231 --rc genhtml_legend=1 00:08:31.231 --rc geninfo_all_blocks=1 00:08:31.231 --rc geninfo_unexecuted_blocks=1 00:08:31.231 00:08:31.231 ' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:31.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.231 --rc genhtml_branch_coverage=1 00:08:31.231 --rc genhtml_function_coverage=1 00:08:31.231 --rc genhtml_legend=1 00:08:31.231 --rc geninfo_all_blocks=1 00:08:31.231 --rc geninfo_unexecuted_blocks=1 00:08:31.231 00:08:31.231 ' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:31.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.231 --rc genhtml_branch_coverage=1 00:08:31.231 --rc genhtml_function_coverage=1 00:08:31.231 --rc genhtml_legend=1 00:08:31.231 --rc geninfo_all_blocks=1 00:08:31.231 --rc geninfo_unexecuted_blocks=1 00:08:31.231 00:08:31.231 ' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:31.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.231 --rc genhtml_branch_coverage=1 00:08:31.231 --rc genhtml_function_coverage=1 00:08:31.231 --rc genhtml_legend=1 00:08:31.231 --rc geninfo_all_blocks=1 00:08:31.231 --rc geninfo_unexecuted_blocks=1 00:08:31.231 00:08:31.231 ' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.231 05:26:11 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:31.231 ************************************ 00:08:31.231 START TEST dd_inflate_file 00:08:31.231 ************************************ 00:08:31.231 05:26:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:31.231 [2024-12-16 05:26:11.292647] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:31.232 [2024-12-16 05:26:11.293076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64610 ] 00:08:31.232 [2024-12-16 05:26:11.479266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.490 [2024-12-16 05:26:11.603677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.749 [2024-12-16 05:26:11.794904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.749  [2024-12-16T05:26:13.006Z] Copying: 64/64 [MB] (average 1641 MBps) 00:08:32.747 00:08:32.747 00:08:32.747 real 0m1.697s 00:08:32.747 user 0m1.392s 00:08:32.747 sys 0m0.980s 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:32.747 ************************************ 00:08:32.747 END TEST dd_inflate_file 00:08:32.747 ************************************ 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:32.747 ************************************ 00:08:32.747 START TEST dd_copy_to_out_bdev 00:08:32.747 ************************************ 00:08:32.747 05:26:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:32.747 { 00:08:32.747 "subsystems": [ 00:08:32.747 { 00:08:32.747 "subsystem": "bdev", 00:08:32.747 "config": [ 00:08:32.747 { 00:08:32.747 "params": { 00:08:32.747 "trtype": "pcie", 00:08:32.747 "traddr": "0000:00:10.0", 00:08:32.747 "name": "Nvme0" 00:08:32.747 }, 00:08:32.747 "method": "bdev_nvme_attach_controller" 00:08:32.747 }, 00:08:32.747 { 00:08:32.747 "params": { 00:08:32.747 "trtype": "pcie", 00:08:32.747 "traddr": "0000:00:11.0", 00:08:32.747 "name": "Nvme1" 00:08:32.747 }, 00:08:32.747 "method": "bdev_nvme_attach_controller" 00:08:32.747 }, 00:08:32.747 { 00:08:32.747 "method": "bdev_wait_for_examine" 00:08:32.747 } 00:08:32.747 ] 00:08:32.747 } 00:08:32.747 ] 00:08:32.747 } 00:08:33.007 [2024-12-16 05:26:13.033936] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:33.007 [2024-12-16 05:26:13.034109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64656 ] 00:08:33.007 [2024-12-16 05:26:13.201264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.266 [2024-12-16 05:26:13.290001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.266 [2024-12-16 05:26:13.441410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.644  [2024-12-16T05:26:15.162Z] Copying: 46/64 [MB] (46 MBps) [2024-12-16T05:26:16.099Z] Copying: 64/64 [MB] (average 46 MBps) 00:08:35.840 00:08:35.840 ************************************ 00:08:35.840 END TEST dd_copy_to_out_bdev 00:08:35.840 ************************************ 00:08:35.840 00:08:35.840 real 0m3.025s 00:08:35.840 user 0m2.761s 00:08:35.840 sys 0m2.253s 00:08:35.840 05:26:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.840 05:26:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:35.840 ************************************ 00:08:35.840 START TEST dd_offset_magic 00:08:35.840 ************************************ 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:35.840 05:26:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:35.840 { 00:08:35.840 "subsystems": [ 00:08:35.840 { 00:08:35.840 "subsystem": "bdev", 00:08:35.840 "config": [ 00:08:35.840 { 00:08:35.840 "params": { 00:08:35.840 "trtype": "pcie", 00:08:35.840 "traddr": "0000:00:10.0", 00:08:35.840 "name": "Nvme0" 00:08:35.840 }, 00:08:35.840 "method": "bdev_nvme_attach_controller" 00:08:35.840 }, 00:08:35.840 { 00:08:35.840 "params": { 00:08:35.840 "trtype": "pcie", 00:08:35.840 "traddr": "0000:00:11.0", 00:08:35.840 "name": "Nvme1" 
00:08:35.840 }, 00:08:35.840 "method": "bdev_nvme_attach_controller" 00:08:35.840 }, 00:08:35.840 { 00:08:35.840 "method": "bdev_wait_for_examine" 00:08:35.840 } 00:08:35.840 ] 00:08:35.840 } 00:08:35.840 ] 00:08:35.840 } 00:08:36.100 [2024-12-16 05:26:16.134374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:36.100 [2024-12-16 05:26:16.134764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64713 ] 00:08:36.100 [2024-12-16 05:26:16.315195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.359 [2024-12-16 05:26:16.409782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.359 [2024-12-16 05:26:16.564740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.618  [2024-12-16T05:26:17.813Z] Copying: 65/65 [MB] (average 970 MBps) 00:08:37.554 00:08:37.554 05:26:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:37.554 05:26:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:37.554 05:26:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:37.554 05:26:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:37.554 { 00:08:37.554 "subsystems": [ 00:08:37.554 { 00:08:37.554 "subsystem": "bdev", 00:08:37.554 "config": [ 00:08:37.554 { 00:08:37.554 "params": { 00:08:37.554 "trtype": "pcie", 00:08:37.554 "traddr": "0000:00:10.0", 00:08:37.554 "name": "Nvme0" 00:08:37.554 }, 00:08:37.554 "method": "bdev_nvme_attach_controller" 00:08:37.554 }, 00:08:37.554 { 00:08:37.554 "params": { 00:08:37.554 "trtype": "pcie", 00:08:37.554 "traddr": "0000:00:11.0", 00:08:37.554 "name": "Nvme1" 00:08:37.554 }, 00:08:37.554 "method": "bdev_nvme_attach_controller" 00:08:37.554 }, 00:08:37.554 { 00:08:37.554 "method": "bdev_wait_for_examine" 00:08:37.554 } 00:08:37.554 ] 00:08:37.554 } 00:08:37.554 ] 00:08:37.554 } 00:08:37.814 [2024-12-16 05:26:17.817836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:37.814 [2024-12-16 05:26:17.818017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64739 ] 00:08:37.814 [2024-12-16 05:26:17.991589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.072 [2024-12-16 05:26:18.079418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.072 [2024-12-16 05:26:18.234714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.331  [2024-12-16T05:26:19.526Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:39.267 00:08:39.267 05:26:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:39.267 05:26:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:39.267 05:26:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:39.267 05:26:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:39.267 05:26:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:39.267 05:26:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:39.267 05:26:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:39.267 { 00:08:39.267 "subsystems": [ 00:08:39.267 { 00:08:39.267 "subsystem": "bdev", 00:08:39.267 "config": [ 00:08:39.267 { 00:08:39.267 "params": { 00:08:39.267 "trtype": "pcie", 00:08:39.267 "traddr": "0000:00:10.0", 00:08:39.267 "name": "Nvme0" 00:08:39.267 }, 00:08:39.267 "method": "bdev_nvme_attach_controller" 00:08:39.267 }, 00:08:39.267 { 00:08:39.267 "params": { 00:08:39.267 "trtype": "pcie", 00:08:39.267 "traddr": "0000:00:11.0", 00:08:39.267 "name": "Nvme1" 00:08:39.267 }, 00:08:39.267 "method": "bdev_nvme_attach_controller" 00:08:39.267 }, 00:08:39.267 { 00:08:39.267 "method": "bdev_wait_for_examine" 00:08:39.267 } 00:08:39.268 ] 00:08:39.268 } 00:08:39.268 ] 00:08:39.268 } 00:08:39.268 [2024-12-16 05:26:19.458751] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:39.268 [2024-12-16 05:26:19.459346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64768 ] 00:08:39.526 [2024-12-16 05:26:19.630935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.526 [2024-12-16 05:26:19.712992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.785 [2024-12-16 05:26:19.866467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.044  [2024-12-16T05:26:21.239Z] Copying: 65/65 [MB] (average 1048 MBps) 00:08:40.980 00:08:40.980 05:26:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:40.980 05:26:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:40.980 05:26:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:40.980 05:26:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:40.980 { 00:08:40.980 "subsystems": [ 00:08:40.981 { 00:08:40.981 "subsystem": "bdev", 00:08:40.981 "config": [ 00:08:40.981 { 00:08:40.981 "params": { 00:08:40.981 "trtype": "pcie", 00:08:40.981 "traddr": "0000:00:10.0", 00:08:40.981 "name": "Nvme0" 00:08:40.981 }, 00:08:40.981 "method": "bdev_nvme_attach_controller" 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "params": { 00:08:40.981 "trtype": "pcie", 00:08:40.981 "traddr": "0000:00:11.0", 00:08:40.981 "name": "Nvme1" 00:08:40.981 }, 00:08:40.981 "method": "bdev_nvme_attach_controller" 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "method": "bdev_wait_for_examine" 00:08:40.981 } 00:08:40.981 ] 00:08:40.981 } 00:08:40.981 ] 00:08:40.981 } 00:08:40.981 [2024-12-16 05:26:21.020525] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:40.981 [2024-12-16 05:26:21.021090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64794 ] 00:08:40.981 [2024-12-16 05:26:21.200404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.240 [2024-12-16 05:26:21.285826] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.240 [2024-12-16 05:26:21.431208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.499  [2024-12-16T05:26:22.695Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:42.436 00:08:42.437 ************************************ 00:08:42.437 END TEST dd_offset_magic 00:08:42.437 ************************************ 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:42.437 00:08:42.437 real 0m6.532s 00:08:42.437 user 0m5.479s 00:08:42.437 sys 0m2.200s 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:42.437 05:26:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:42.437 { 00:08:42.437 "subsystems": [ 00:08:42.437 { 00:08:42.437 "subsystem": "bdev", 00:08:42.437 "config": [ 00:08:42.437 { 00:08:42.437 "params": { 00:08:42.437 "trtype": "pcie", 00:08:42.437 "traddr": "0000:00:10.0", 00:08:42.437 "name": "Nvme0" 00:08:42.437 }, 00:08:42.437 "method": "bdev_nvme_attach_controller" 00:08:42.437 }, 00:08:42.437 { 00:08:42.437 "params": { 00:08:42.437 "trtype": "pcie", 00:08:42.437 "traddr": "0000:00:11.0", 00:08:42.437 "name": "Nvme1" 00:08:42.437 }, 00:08:42.437 "method": "bdev_nvme_attach_controller" 00:08:42.437 }, 00:08:42.437 { 00:08:42.437 "method": "bdev_wait_for_examine" 00:08:42.437 } 00:08:42.437 ] 00:08:42.437 } 00:08:42.437 ] 00:08:42.437 } 00:08:42.696 [2024-12-16 05:26:22.716916] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:42.696 [2024-12-16 05:26:22.717097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64847 ] 00:08:42.696 [2024-12-16 05:26:22.895818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.955 [2024-12-16 05:26:22.983014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.955 [2024-12-16 05:26:23.133482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.230  [2024-12-16T05:26:24.096Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:43.837 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:43.837 05:26:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:44.096 { 00:08:44.096 "subsystems": [ 00:08:44.096 { 00:08:44.096 "subsystem": "bdev", 00:08:44.096 "config": [ 00:08:44.096 { 00:08:44.096 "params": { 00:08:44.096 "trtype": "pcie", 00:08:44.096 "traddr": "0000:00:10.0", 00:08:44.096 "name": "Nvme0" 00:08:44.096 }, 00:08:44.096 "method": "bdev_nvme_attach_controller" 00:08:44.096 }, 00:08:44.096 { 00:08:44.096 "params": { 00:08:44.096 "trtype": "pcie", 00:08:44.096 "traddr": "0000:00:11.0", 00:08:44.096 "name": "Nvme1" 00:08:44.096 }, 00:08:44.096 "method": "bdev_nvme_attach_controller" 00:08:44.096 }, 00:08:44.096 { 00:08:44.096 "method": "bdev_wait_for_examine" 00:08:44.096 } 00:08:44.096 ] 00:08:44.096 } 00:08:44.096 ] 00:08:44.096 } 00:08:44.096 [2024-12-16 05:26:24.189291] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:44.096 [2024-12-16 05:26:24.189463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64880 ] 00:08:44.356 [2024-12-16 05:26:24.368712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.356 [2024-12-16 05:26:24.451634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.356 [2024-12-16 05:26:24.596129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.615  [2024-12-16T05:26:25.809Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:45.550 00:08:45.550 05:26:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:45.550 ************************************ 00:08:45.550 END TEST spdk_dd_bdev_to_bdev 00:08:45.550 ************************************ 00:08:45.550 00:08:45.550 real 0m14.771s 00:08:45.550 user 0m12.429s 00:08:45.550 sys 0m7.191s 00:08:45.550 05:26:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.550 05:26:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:45.550 05:26:25 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:45.550 05:26:25 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:45.550 05:26:25 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.550 05:26:25 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.550 05:26:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:45.550 ************************************ 00:08:45.550 START TEST spdk_dd_uring 00:08:45.550 ************************************ 00:08:45.550 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:45.810 * Looking for test storage... 
00:08:45.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.810 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:45.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.811 --rc genhtml_branch_coverage=1 00:08:45.811 --rc genhtml_function_coverage=1 00:08:45.811 --rc genhtml_legend=1 00:08:45.811 --rc geninfo_all_blocks=1 00:08:45.811 --rc geninfo_unexecuted_blocks=1 00:08:45.811 00:08:45.811 ' 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:45.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.811 --rc genhtml_branch_coverage=1 00:08:45.811 --rc genhtml_function_coverage=1 00:08:45.811 --rc genhtml_legend=1 00:08:45.811 --rc geninfo_all_blocks=1 00:08:45.811 --rc geninfo_unexecuted_blocks=1 00:08:45.811 00:08:45.811 ' 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:45.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.811 --rc genhtml_branch_coverage=1 00:08:45.811 --rc genhtml_function_coverage=1 00:08:45.811 --rc genhtml_legend=1 00:08:45.811 --rc geninfo_all_blocks=1 00:08:45.811 --rc geninfo_unexecuted_blocks=1 00:08:45.811 00:08:45.811 ' 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.811 --rc genhtml_branch_coverage=1 00:08:45.811 --rc genhtml_function_coverage=1 00:08:45.811 --rc genhtml_legend=1 00:08:45.811 --rc geninfo_all_blocks=1 00:08:45.811 --rc geninfo_unexecuted_blocks=1 00:08:45.811 00:08:45.811 ' 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:45.811 ************************************ 00:08:45.811 START TEST dd_uring_copy 00:08:45.811 ************************************ 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:45.811 
05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:45.811 05:26:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:45.811 05:26:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:45.811 05:26:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:45.811 05:26:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=t5ia58d2q3uv04r469so03vrwnhmamczdp62ofr21n0y04vckfgo36tbijavjqionc5ieuzt5mv2dmage9ddkhl7innbvt2ywfcxspj7zd9tcowcr083v5ud9h475m6g5q0su4541kv1kduc0s66c9vvjdq0h973k19otu97j8itl50xeksz5dbwp5x0tlmnzp73hijvedoviesyiisufm568eqcub2v6tl9uu5cdmjrcjlvd23n3ruemnl5eav91wvn19j30x2nyslhykdw10bt4krgua88phk05lalllk2oorxclqd9hiqvomhfltfxlkwvtcuw09d25w5cpffoe00hf3o8p6hu9ci16f6ccg0j40uo82f6fpjf8v69b4nleed8nro9k9v5xpmbvsh6fyzo8cepryhkaaq1dm20paaxlly8wakr8kdw65gauzmzyw2wcreonp85h0oub3iomgpofq5vqabz1245h63gky2f7j6uwh86ylem79ikftef46ls73uj34alvj7sdb743ww0nhc1z8hsh48akbtwunbfvn9h38iousy4nn72p48m8m79tmhdlog394fb2yrjuplvjjcbsdutets8pglrmvohg9ngj8nq7zihb4m4k9u8i2e6ykye4slqvyvx0s4gw7mo3bn7wa90p3v0mfazn7mv2iewlc6hrn5tg3ld8hfc55jz67j7gox8s5qjkkc3gyl0fa1ldxr4aq4ak19spzv7w065yyx6zl6majffbqz9lhectdaaugrqfo9kmag3thc94jqim5lmpfgptz1t1ple5z3ojbqhrda2dmoqu25mtiberdtnz9er4bskej1zzvnu4qfh4uq9gxo8t86skwpnteld7uq7450mgoayjwa257kt1nveiwb7f35vjlwvx7e0a8klj1o1j3mpbyvnbliibuuruxwy9ilf6xkgbxz6okhko851ht2nufg3y5kzxop227r8x67u0gz1ejek261mburc9kzzt9dz2sfy5im 00:08:45.811 05:26:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
t5ia58d2q3uv04r469so03vrwnhmamczdp62ofr21n0y04vckfgo36tbijavjqionc5ieuzt5mv2dmage9ddkhl7innbvt2ywfcxspj7zd9tcowcr083v5ud9h475m6g5q0su4541kv1kduc0s66c9vvjdq0h973k19otu97j8itl50xeksz5dbwp5x0tlmnzp73hijvedoviesyiisufm568eqcub2v6tl9uu5cdmjrcjlvd23n3ruemnl5eav91wvn19j30x2nyslhykdw10bt4krgua88phk05lalllk2oorxclqd9hiqvomhfltfxlkwvtcuw09d25w5cpffoe00hf3o8p6hu9ci16f6ccg0j40uo82f6fpjf8v69b4nleed8nro9k9v5xpmbvsh6fyzo8cepryhkaaq1dm20paaxlly8wakr8kdw65gauzmzyw2wcreonp85h0oub3iomgpofq5vqabz1245h63gky2f7j6uwh86ylem79ikftef46ls73uj34alvj7sdb743ww0nhc1z8hsh48akbtwunbfvn9h38iousy4nn72p48m8m79tmhdlog394fb2yrjuplvjjcbsdutets8pglrmvohg9ngj8nq7zihb4m4k9u8i2e6ykye4slqvyvx0s4gw7mo3bn7wa90p3v0mfazn7mv2iewlc6hrn5tg3ld8hfc55jz67j7gox8s5qjkkc3gyl0fa1ldxr4aq4ak19spzv7w065yyx6zl6majffbqz9lhectdaaugrqfo9kmag3thc94jqim5lmpfgptz1t1ple5z3ojbqhrda2dmoqu25mtiberdtnz9er4bskej1zzvnu4qfh4uq9gxo8t86skwpnteld7uq7450mgoayjwa257kt1nveiwb7f35vjlwvx7e0a8klj1o1j3mpbyvnbliibuuruxwy9ilf6xkgbxz6okhko851ht2nufg3y5kzxop227r8x67u0gz1ejek261mburc9kzzt9dz2sfy5im 00:08:45.811 05:26:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:46.071 [2024-12-16 05:26:26.101200] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:46.071 [2024-12-16 05:26:26.101555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64963 ] 00:08:46.071 [2024-12-16 05:26:26.270512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.330 [2024-12-16 05:26:26.357961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.330 [2024-12-16 05:26:26.502845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.267  [2024-12-16T05:26:29.434Z] Copying: 511/511 [MB] (average 1684 MBps) 00:08:49.175 00:08:49.175 05:26:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:49.175 05:26:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:49.175 05:26:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:49.175 05:26:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:49.175 { 00:08:49.175 "subsystems": [ 00:08:49.175 { 00:08:49.175 "subsystem": "bdev", 00:08:49.175 "config": [ 00:08:49.175 { 00:08:49.175 "params": { 00:08:49.175 "block_size": 512, 00:08:49.175 "num_blocks": 1048576, 00:08:49.175 "name": "malloc0" 00:08:49.175 }, 00:08:49.176 "method": "bdev_malloc_create" 00:08:49.176 }, 00:08:49.176 { 00:08:49.176 "params": { 00:08:49.176 "filename": "/dev/zram1", 00:08:49.176 "name": "uring0" 00:08:49.176 }, 00:08:49.176 "method": "bdev_uring_create" 00:08:49.176 }, 00:08:49.176 { 00:08:49.176 "method": "bdev_wait_for_examine" 00:08:49.176 } 00:08:49.176 ] 00:08:49.176 } 00:08:49.176 ] 00:08:49.176 } 00:08:49.176 [2024-12-16 05:26:29.274353] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:49.176 [2024-12-16 05:26:29.274532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64999 ] 00:08:49.435 [2024-12-16 05:26:29.441908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.435 [2024-12-16 05:26:29.526797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.435 [2024-12-16 05:26:29.672795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.341  [2024-12-16T05:26:32.537Z] Copying: 203/512 [MB] (203 MBps) [2024-12-16T05:26:32.797Z] Copying: 413/512 [MB] (210 MBps) [2024-12-16T05:26:34.704Z] Copying: 512/512 [MB] (average 206 MBps) 00:08:54.445 00:08:54.445 05:26:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:54.445 05:26:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:54.445 05:26:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:54.445 05:26:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:54.445 { 00:08:54.445 "subsystems": [ 00:08:54.445 { 00:08:54.445 "subsystem": "bdev", 00:08:54.445 "config": [ 00:08:54.445 { 00:08:54.445 "params": { 00:08:54.445 "block_size": 512, 00:08:54.445 "num_blocks": 1048576, 00:08:54.445 "name": "malloc0" 00:08:54.445 }, 00:08:54.445 "method": "bdev_malloc_create" 00:08:54.445 }, 00:08:54.445 { 00:08:54.445 "params": { 00:08:54.445 "filename": "/dev/zram1", 00:08:54.445 "name": "uring0" 00:08:54.445 }, 00:08:54.445 "method": "bdev_uring_create" 00:08:54.445 }, 00:08:54.445 { 00:08:54.445 "method": "bdev_wait_for_examine" 00:08:54.445 } 00:08:54.445 ] 00:08:54.445 } 00:08:54.445 ] 00:08:54.445 } 00:08:54.445 [2024-12-16 05:26:34.670918] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:54.445 [2024-12-16 05:26:34.671058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65072 ] 00:08:54.705 [2024-12-16 05:26:34.837660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.705 [2024-12-16 05:26:34.922900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.964 [2024-12-16 05:26:35.070474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.367  [2024-12-16T05:26:38.004Z] Copying: 149/512 [MB] (149 MBps) [2024-12-16T05:26:38.941Z] Copying: 285/512 [MB] (136 MBps) [2024-12-16T05:26:39.200Z] Copying: 436/512 [MB] (150 MBps) [2024-12-16T05:26:41.105Z] Copying: 512/512 [MB] (average 142 MBps) 00:09:00.846 00:09:00.846 05:26:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:00.846 05:26:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ t5ia58d2q3uv04r469so03vrwnhmamczdp62ofr21n0y04vckfgo36tbijavjqionc5ieuzt5mv2dmage9ddkhl7innbvt2ywfcxspj7zd9tcowcr083v5ud9h475m6g5q0su4541kv1kduc0s66c9vvjdq0h973k19otu97j8itl50xeksz5dbwp5x0tlmnzp73hijvedoviesyiisufm568eqcub2v6tl9uu5cdmjrcjlvd23n3ruemnl5eav91wvn19j30x2nyslhykdw10bt4krgua88phk05lalllk2oorxclqd9hiqvomhfltfxlkwvtcuw09d25w5cpffoe00hf3o8p6hu9ci16f6ccg0j40uo82f6fpjf8v69b4nleed8nro9k9v5xpmbvsh6fyzo8cepryhkaaq1dm20paaxlly8wakr8kdw65gauzmzyw2wcreonp85h0oub3iomgpofq5vqabz1245h63gky2f7j6uwh86ylem79ikftef46ls73uj34alvj7sdb743ww0nhc1z8hsh48akbtwunbfvn9h38iousy4nn72p48m8m79tmhdlog394fb2yrjuplvjjcbsdutets8pglrmvohg9ngj8nq7zihb4m4k9u8i2e6ykye4slqvyvx0s4gw7mo3bn7wa90p3v0mfazn7mv2iewlc6hrn5tg3ld8hfc55jz67j7gox8s5qjkkc3gyl0fa1ldxr4aq4ak19spzv7w065yyx6zl6majffbqz9lhectdaaugrqfo9kmag3thc94jqim5lmpfgptz1t1ple5z3ojbqhrda2dmoqu25mtiberdtnz9er4bskej1zzvnu4qfh4uq9gxo8t86skwpnteld7uq7450mgoayjwa257kt1nveiwb7f35vjlwvx7e0a8klj1o1j3mpbyvnbliibuuruxwy9ilf6xkgbxz6okhko851ht2nufg3y5kzxop227r8x67u0gz1ejek261mburc9kzzt9dz2sfy5im == 
\t\5\i\a\5\8\d\2\q\3\u\v\0\4\r\4\6\9\s\o\0\3\v\r\w\n\h\m\a\m\c\z\d\p\6\2\o\f\r\2\1\n\0\y\0\4\v\c\k\f\g\o\3\6\t\b\i\j\a\v\j\q\i\o\n\c\5\i\e\u\z\t\5\m\v\2\d\m\a\g\e\9\d\d\k\h\l\7\i\n\n\b\v\t\2\y\w\f\c\x\s\p\j\7\z\d\9\t\c\o\w\c\r\0\8\3\v\5\u\d\9\h\4\7\5\m\6\g\5\q\0\s\u\4\5\4\1\k\v\1\k\d\u\c\0\s\6\6\c\9\v\v\j\d\q\0\h\9\7\3\k\1\9\o\t\u\9\7\j\8\i\t\l\5\0\x\e\k\s\z\5\d\b\w\p\5\x\0\t\l\m\n\z\p\7\3\h\i\j\v\e\d\o\v\i\e\s\y\i\i\s\u\f\m\5\6\8\e\q\c\u\b\2\v\6\t\l\9\u\u\5\c\d\m\j\r\c\j\l\v\d\2\3\n\3\r\u\e\m\n\l\5\e\a\v\9\1\w\v\n\1\9\j\3\0\x\2\n\y\s\l\h\y\k\d\w\1\0\b\t\4\k\r\g\u\a\8\8\p\h\k\0\5\l\a\l\l\l\k\2\o\o\r\x\c\l\q\d\9\h\i\q\v\o\m\h\f\l\t\f\x\l\k\w\v\t\c\u\w\0\9\d\2\5\w\5\c\p\f\f\o\e\0\0\h\f\3\o\8\p\6\h\u\9\c\i\1\6\f\6\c\c\g\0\j\4\0\u\o\8\2\f\6\f\p\j\f\8\v\6\9\b\4\n\l\e\e\d\8\n\r\o\9\k\9\v\5\x\p\m\b\v\s\h\6\f\y\z\o\8\c\e\p\r\y\h\k\a\a\q\1\d\m\2\0\p\a\a\x\l\l\y\8\w\a\k\r\8\k\d\w\6\5\g\a\u\z\m\z\y\w\2\w\c\r\e\o\n\p\8\5\h\0\o\u\b\3\i\o\m\g\p\o\f\q\5\v\q\a\b\z\1\2\4\5\h\6\3\g\k\y\2\f\7\j\6\u\w\h\8\6\y\l\e\m\7\9\i\k\f\t\e\f\4\6\l\s\7\3\u\j\3\4\a\l\v\j\7\s\d\b\7\4\3\w\w\0\n\h\c\1\z\8\h\s\h\4\8\a\k\b\t\w\u\n\b\f\v\n\9\h\3\8\i\o\u\s\y\4\n\n\7\2\p\4\8\m\8\m\7\9\t\m\h\d\l\o\g\3\9\4\f\b\2\y\r\j\u\p\l\v\j\j\c\b\s\d\u\t\e\t\s\8\p\g\l\r\m\v\o\h\g\9\n\g\j\8\n\q\7\z\i\h\b\4\m\4\k\9\u\8\i\2\e\6\y\k\y\e\4\s\l\q\v\y\v\x\0\s\4\g\w\7\m\o\3\b\n\7\w\a\9\0\p\3\v\0\m\f\a\z\n\7\m\v\2\i\e\w\l\c\6\h\r\n\5\t\g\3\l\d\8\h\f\c\5\5\j\z\6\7\j\7\g\o\x\8\s\5\q\j\k\k\c\3\g\y\l\0\f\a\1\l\d\x\r\4\a\q\4\a\k\1\9\s\p\z\v\7\w\0\6\5\y\y\x\6\z\l\6\m\a\j\f\f\b\q\z\9\l\h\e\c\t\d\a\a\u\g\r\q\f\o\9\k\m\a\g\3\t\h\c\9\4\j\q\i\m\5\l\m\p\f\g\p\t\z\1\t\1\p\l\e\5\z\3\o\j\b\q\h\r\d\a\2\d\m\o\q\u\2\5\m\t\i\b\e\r\d\t\n\z\9\e\r\4\b\s\k\e\j\1\z\z\v\n\u\4\q\f\h\4\u\q\9\g\x\o\8\t\8\6\s\k\w\p\n\t\e\l\d\7\u\q\7\4\5\0\m\g\o\a\y\j\w\a\2\5\7\k\t\1\n\v\e\i\w\b\7\f\3\5\v\j\l\w\v\x\7\e\0\a\8\k\l\j\1\o\1\j\3\m\p\b\y\v\n\b\l\i\i\b\u\u\r\u\x\w\y\9\i\l\f\6\x\k\g\b\x\z\6\o\k\h\k\o\8\5\1\h\t\2\n\u\f\g\3\y\5\k\z\x\o\p\2\2\7\r\8\x\6\7\u\0\g\z\1\e\j\e\k\2\6\1\m\b\u\r\c\9\k\z\z\t\9\d\z\2\s\f\y\5\i\m ]] 00:09:00.846 05:26:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:00.847 05:26:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ t5ia58d2q3uv04r469so03vrwnhmamczdp62ofr21n0y04vckfgo36tbijavjqionc5ieuzt5mv2dmage9ddkhl7innbvt2ywfcxspj7zd9tcowcr083v5ud9h475m6g5q0su4541kv1kduc0s66c9vvjdq0h973k19otu97j8itl50xeksz5dbwp5x0tlmnzp73hijvedoviesyiisufm568eqcub2v6tl9uu5cdmjrcjlvd23n3ruemnl5eav91wvn19j30x2nyslhykdw10bt4krgua88phk05lalllk2oorxclqd9hiqvomhfltfxlkwvtcuw09d25w5cpffoe00hf3o8p6hu9ci16f6ccg0j40uo82f6fpjf8v69b4nleed8nro9k9v5xpmbvsh6fyzo8cepryhkaaq1dm20paaxlly8wakr8kdw65gauzmzyw2wcreonp85h0oub3iomgpofq5vqabz1245h63gky2f7j6uwh86ylem79ikftef46ls73uj34alvj7sdb743ww0nhc1z8hsh48akbtwunbfvn9h38iousy4nn72p48m8m79tmhdlog394fb2yrjuplvjjcbsdutets8pglrmvohg9ngj8nq7zihb4m4k9u8i2e6ykye4slqvyvx0s4gw7mo3bn7wa90p3v0mfazn7mv2iewlc6hrn5tg3ld8hfc55jz67j7gox8s5qjkkc3gyl0fa1ldxr4aq4ak19spzv7w065yyx6zl6majffbqz9lhectdaaugrqfo9kmag3thc94jqim5lmpfgptz1t1ple5z3ojbqhrda2dmoqu25mtiberdtnz9er4bskej1zzvnu4qfh4uq9gxo8t86skwpnteld7uq7450mgoayjwa257kt1nveiwb7f35vjlwvx7e0a8klj1o1j3mpbyvnbliibuuruxwy9ilf6xkgbxz6okhko851ht2nufg3y5kzxop227r8x67u0gz1ejek261mburc9kzzt9dz2sfy5im == 
\t\5\i\a\5\8\d\2\q\3\u\v\0\4\r\4\6\9\s\o\0\3\v\r\w\n\h\m\a\m\c\z\d\p\6\2\o\f\r\2\1\n\0\y\0\4\v\c\k\f\g\o\3\6\t\b\i\j\a\v\j\q\i\o\n\c\5\i\e\u\z\t\5\m\v\2\d\m\a\g\e\9\d\d\k\h\l\7\i\n\n\b\v\t\2\y\w\f\c\x\s\p\j\7\z\d\9\t\c\o\w\c\r\0\8\3\v\5\u\d\9\h\4\7\5\m\6\g\5\q\0\s\u\4\5\4\1\k\v\1\k\d\u\c\0\s\6\6\c\9\v\v\j\d\q\0\h\9\7\3\k\1\9\o\t\u\9\7\j\8\i\t\l\5\0\x\e\k\s\z\5\d\b\w\p\5\x\0\t\l\m\n\z\p\7\3\h\i\j\v\e\d\o\v\i\e\s\y\i\i\s\u\f\m\5\6\8\e\q\c\u\b\2\v\6\t\l\9\u\u\5\c\d\m\j\r\c\j\l\v\d\2\3\n\3\r\u\e\m\n\l\5\e\a\v\9\1\w\v\n\1\9\j\3\0\x\2\n\y\s\l\h\y\k\d\w\1\0\b\t\4\k\r\g\u\a\8\8\p\h\k\0\5\l\a\l\l\l\k\2\o\o\r\x\c\l\q\d\9\h\i\q\v\o\m\h\f\l\t\f\x\l\k\w\v\t\c\u\w\0\9\d\2\5\w\5\c\p\f\f\o\e\0\0\h\f\3\o\8\p\6\h\u\9\c\i\1\6\f\6\c\c\g\0\j\4\0\u\o\8\2\f\6\f\p\j\f\8\v\6\9\b\4\n\l\e\e\d\8\n\r\o\9\k\9\v\5\x\p\m\b\v\s\h\6\f\y\z\o\8\c\e\p\r\y\h\k\a\a\q\1\d\m\2\0\p\a\a\x\l\l\y\8\w\a\k\r\8\k\d\w\6\5\g\a\u\z\m\z\y\w\2\w\c\r\e\o\n\p\8\5\h\0\o\u\b\3\i\o\m\g\p\o\f\q\5\v\q\a\b\z\1\2\4\5\h\6\3\g\k\y\2\f\7\j\6\u\w\h\8\6\y\l\e\m\7\9\i\k\f\t\e\f\4\6\l\s\7\3\u\j\3\4\a\l\v\j\7\s\d\b\7\4\3\w\w\0\n\h\c\1\z\8\h\s\h\4\8\a\k\b\t\w\u\n\b\f\v\n\9\h\3\8\i\o\u\s\y\4\n\n\7\2\p\4\8\m\8\m\7\9\t\m\h\d\l\o\g\3\9\4\f\b\2\y\r\j\u\p\l\v\j\j\c\b\s\d\u\t\e\t\s\8\p\g\l\r\m\v\o\h\g\9\n\g\j\8\n\q\7\z\i\h\b\4\m\4\k\9\u\8\i\2\e\6\y\k\y\e\4\s\l\q\v\y\v\x\0\s\4\g\w\7\m\o\3\b\n\7\w\a\9\0\p\3\v\0\m\f\a\z\n\7\m\v\2\i\e\w\l\c\6\h\r\n\5\t\g\3\l\d\8\h\f\c\5\5\j\z\6\7\j\7\g\o\x\8\s\5\q\j\k\k\c\3\g\y\l\0\f\a\1\l\d\x\r\4\a\q\4\a\k\1\9\s\p\z\v\7\w\0\6\5\y\y\x\6\z\l\6\m\a\j\f\f\b\q\z\9\l\h\e\c\t\d\a\a\u\g\r\q\f\o\9\k\m\a\g\3\t\h\c\9\4\j\q\i\m\5\l\m\p\f\g\p\t\z\1\t\1\p\l\e\5\z\3\o\j\b\q\h\r\d\a\2\d\m\o\q\u\2\5\m\t\i\b\e\r\d\t\n\z\9\e\r\4\b\s\k\e\j\1\z\z\v\n\u\4\q\f\h\4\u\q\9\g\x\o\8\t\8\6\s\k\w\p\n\t\e\l\d\7\u\q\7\4\5\0\m\g\o\a\y\j\w\a\2\5\7\k\t\1\n\v\e\i\w\b\7\f\3\5\v\j\l\w\v\x\7\e\0\a\8\k\l\j\1\o\1\j\3\m\p\b\y\v\n\b\l\i\i\b\u\u\r\u\x\w\y\9\i\l\f\6\x\k\g\b\x\z\6\o\k\h\k\o\8\5\1\h\t\2\n\u\f\g\3\y\5\k\z\x\o\p\2\2\7\r\8\x\6\7\u\0\g\z\1\e\j\e\k\2\6\1\m\b\u\r\c\9\k\z\z\t\9\d\z\2\s\f\y\5\i\m ]] 00:09:00.847 05:26:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:01.416 05:26:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:01.416 05:26:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:01.416 05:26:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:01.416 05:26:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:01.417 { 00:09:01.417 "subsystems": [ 00:09:01.417 { 00:09:01.417 "subsystem": "bdev", 00:09:01.417 "config": [ 00:09:01.417 { 00:09:01.417 "params": { 00:09:01.417 "block_size": 512, 00:09:01.417 "num_blocks": 1048576, 00:09:01.417 "name": "malloc0" 00:09:01.417 }, 00:09:01.417 "method": "bdev_malloc_create" 00:09:01.417 }, 00:09:01.417 { 00:09:01.417 "params": { 00:09:01.417 "filename": "/dev/zram1", 00:09:01.417 "name": "uring0" 00:09:01.417 }, 00:09:01.417 "method": "bdev_uring_create" 00:09:01.417 }, 00:09:01.417 { 00:09:01.417 "method": "bdev_wait_for_examine" 00:09:01.417 } 00:09:01.417 ] 00:09:01.417 } 00:09:01.417 ] 00:09:01.417 } 00:09:01.417 [2024-12-16 05:26:41.459671] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:01.417 [2024-12-16 05:26:41.459805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65175 ] 00:09:01.417 [2024-12-16 05:26:41.626326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.678 [2024-12-16 05:26:41.717407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.678 [2024-12-16 05:26:41.862948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.586  [2024-12-16T05:26:44.414Z] Copying: 140/512 [MB] (140 MBps) [2024-12-16T05:26:45.791Z] Copying: 275/512 [MB] (135 MBps) [2024-12-16T05:26:46.359Z] Copying: 415/512 [MB] (140 MBps) [2024-12-16T05:26:48.265Z] Copying: 512/512 [MB] (average 139 MBps) 00:09:08.006 00:09:08.006 05:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:08.006 05:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:08.006 05:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:08.006 05:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:08.006 05:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:08.006 05:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:08.006 05:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:08.006 05:26:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:08.006 { 00:09:08.006 "subsystems": [ 00:09:08.006 { 00:09:08.006 "subsystem": "bdev", 00:09:08.006 "config": [ 00:09:08.006 { 00:09:08.006 "params": { 00:09:08.006 "block_size": 512, 00:09:08.006 "num_blocks": 1048576, 00:09:08.006 "name": "malloc0" 00:09:08.006 }, 00:09:08.006 "method": "bdev_malloc_create" 00:09:08.006 }, 00:09:08.006 { 00:09:08.006 "params": { 00:09:08.006 "filename": "/dev/zram1", 00:09:08.006 "name": "uring0" 00:09:08.006 }, 00:09:08.006 "method": "bdev_uring_create" 00:09:08.006 }, 00:09:08.006 { 00:09:08.006 "params": { 00:09:08.006 "name": "uring0" 00:09:08.006 }, 00:09:08.006 "method": "bdev_uring_delete" 00:09:08.006 }, 00:09:08.006 { 00:09:08.006 "method": "bdev_wait_for_examine" 00:09:08.006 } 00:09:08.006 ] 00:09:08.006 } 00:09:08.006 ] 00:09:08.006 } 00:09:08.006 [2024-12-16 05:26:48.018854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:08.006 [2024-12-16 05:26:48.019012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65264 ] 00:09:08.006 [2024-12-16 05:26:48.187558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.265 [2024-12-16 05:26:48.282644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.265 [2024-12-16 05:26:48.442198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.928  [2024-12-16T05:26:51.090Z] Copying: 0/0 [B] (average 0 Bps) 00:09:10.831 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.831 05:26:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:10.831 { 00:09:10.831 "subsystems": [ 00:09:10.831 { 00:09:10.831 "subsystem": "bdev", 00:09:10.831 "config": [ 00:09:10.831 { 00:09:10.831 "params": { 00:09:10.831 "block_size": 512, 00:09:10.831 "num_blocks": 1048576, 00:09:10.831 "name": "malloc0" 00:09:10.831 }, 00:09:10.831 "method": "bdev_malloc_create" 00:09:10.831 }, 00:09:10.831 { 00:09:10.831 "params": { 00:09:10.831 "filename": "/dev/zram1", 00:09:10.831 "name": "uring0" 00:09:10.831 }, 00:09:10.831 "method": "bdev_uring_create" 00:09:10.831 }, 00:09:10.831 { 00:09:10.831 "params": { 00:09:10.831 "name": "uring0" 00:09:10.831 }, 00:09:10.831 "method": 
"bdev_uring_delete" 00:09:10.831 }, 00:09:10.832 { 00:09:10.832 "method": "bdev_wait_for_examine" 00:09:10.832 } 00:09:10.832 ] 00:09:10.832 } 00:09:10.832 ] 00:09:10.832 } 00:09:10.832 [2024-12-16 05:26:51.014011] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:10.832 [2024-12-16 05:26:51.014149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65316 ] 00:09:11.090 [2024-12-16 05:26:51.176448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.090 [2024-12-16 05:26:51.266598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.349 [2024-12-16 05:26:51.425796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.916 [2024-12-16 05:26:51.988794] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:11.916 [2024-12-16 05:26:51.988888] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:11.916 [2024-12-16 05:26:51.988905] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:09:11.916 [2024-12-16 05:26:51.988923] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:13.817 [2024-12-16 05:26:53.750029] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:13.817 05:26:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:14.076 00:09:14.076 real 0m28.262s 00:09:14.076 user 0m23.018s 00:09:14.076 sys 0m15.466s 00:09:14.076 05:26:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.076 ************************************ 00:09:14.076 END TEST dd_uring_copy 00:09:14.076 ************************************ 00:09:14.076 05:26:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:14.076 00:09:14.076 real 0m28.497s 00:09:14.076 user 0m23.149s 00:09:14.076 sys 0m15.575s 00:09:14.076 05:26:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.076 05:26:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:14.076 
************************************ 00:09:14.076 END TEST spdk_dd_uring 00:09:14.076 ************************************ 00:09:14.336 05:26:54 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:14.336 05:26:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.336 05:26:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.336 05:26:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:14.336 ************************************ 00:09:14.336 START TEST spdk_dd_sparse 00:09:14.336 ************************************ 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:14.336 * Looking for test storage... 00:09:14.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.336 --rc genhtml_branch_coverage=1 00:09:14.336 --rc genhtml_function_coverage=1 00:09:14.336 --rc genhtml_legend=1 00:09:14.336 --rc geninfo_all_blocks=1 00:09:14.336 --rc geninfo_unexecuted_blocks=1 00:09:14.336 00:09:14.336 ' 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.336 --rc genhtml_branch_coverage=1 00:09:14.336 --rc genhtml_function_coverage=1 00:09:14.336 --rc genhtml_legend=1 00:09:14.336 --rc geninfo_all_blocks=1 00:09:14.336 --rc geninfo_unexecuted_blocks=1 00:09:14.336 00:09:14.336 ' 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.336 --rc genhtml_branch_coverage=1 00:09:14.336 --rc genhtml_function_coverage=1 00:09:14.336 --rc genhtml_legend=1 00:09:14.336 --rc geninfo_all_blocks=1 00:09:14.336 --rc geninfo_unexecuted_blocks=1 00:09:14.336 00:09:14.336 ' 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.336 --rc genhtml_branch_coverage=1 00:09:14.336 --rc genhtml_function_coverage=1 00:09:14.336 --rc genhtml_legend=1 00:09:14.336 --rc geninfo_all_blocks=1 00:09:14.336 --rc geninfo_unexecuted_blocks=1 00:09:14.336 00:09:14.336 ' 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.336 05:26:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.337 05:26:54 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:14.337 1+0 records in 00:09:14.337 1+0 records out 00:09:14.337 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.0083579 s, 502 MB/s 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:14.337 1+0 records in 00:09:14.337 1+0 records out 00:09:14.337 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00408054 s, 1.0 GB/s 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:14.337 1+0 records in 00:09:14.337 1+0 records out 00:09:14.337 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00617716 s, 679 MB/s 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:14.337 ************************************ 00:09:14.337 START TEST dd_sparse_file_to_file 00:09:14.337 ************************************ 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:14.337 05:26:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:14.595 { 00:09:14.595 "subsystems": [ 00:09:14.595 { 00:09:14.595 "subsystem": "bdev", 00:09:14.595 "config": [ 00:09:14.595 { 00:09:14.595 "params": { 00:09:14.595 "block_size": 4096, 00:09:14.595 "filename": "dd_sparse_aio_disk", 00:09:14.595 "name": "dd_aio" 00:09:14.595 }, 00:09:14.595 "method": "bdev_aio_create" 00:09:14.595 }, 00:09:14.596 { 00:09:14.596 "params": { 00:09:14.596 "lvs_name": "dd_lvstore", 00:09:14.596 "bdev_name": "dd_aio" 00:09:14.596 }, 00:09:14.596 "method": "bdev_lvol_create_lvstore" 00:09:14.596 }, 00:09:14.596 { 00:09:14.596 "method": "bdev_wait_for_examine" 00:09:14.596 } 00:09:14.596 ] 00:09:14.596 } 00:09:14.596 ] 00:09:14.596 } 00:09:14.596 [2024-12-16 05:26:54.679775] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:14.596 [2024-12-16 05:26:54.680188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65433 ] 00:09:14.854 [2024-12-16 05:26:54.859722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.854 [2024-12-16 05:26:54.985465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.112 [2024-12-16 05:26:55.199645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.369  [2024-12-16T05:26:56.565Z] Copying: 12/36 [MB] (average 1200 MBps) 00:09:16.306 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:16.306 ************************************ 00:09:16.306 END TEST dd_sparse_file_to_file 00:09:16.306 ************************************ 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:16.306 00:09:16.306 real 0m1.827s 00:09:16.306 user 0m1.534s 00:09:16.306 sys 0m0.934s 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:16.306 ************************************ 00:09:16.306 START TEST dd_sparse_file_to_bdev 00:09:16.306 ************************************ 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:16.306 05:26:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:16.306 { 00:09:16.306 "subsystems": [ 00:09:16.306 { 00:09:16.306 "subsystem": "bdev", 00:09:16.306 "config": [ 00:09:16.306 { 00:09:16.306 "params": { 00:09:16.306 "block_size": 4096, 00:09:16.306 "filename": "dd_sparse_aio_disk", 00:09:16.306 "name": "dd_aio" 00:09:16.306 }, 00:09:16.306 "method": "bdev_aio_create" 00:09:16.306 }, 00:09:16.306 { 00:09:16.306 "params": { 00:09:16.306 "lvs_name": "dd_lvstore", 00:09:16.306 "lvol_name": "dd_lvol", 00:09:16.306 "size_in_mib": 36, 00:09:16.306 "thin_provision": true 00:09:16.306 }, 00:09:16.306 "method": "bdev_lvol_create" 00:09:16.306 }, 00:09:16.306 { 00:09:16.306 "method": "bdev_wait_for_examine" 00:09:16.306 } 00:09:16.306 ] 00:09:16.306 } 00:09:16.306 ] 00:09:16.306 } 00:09:16.566 [2024-12-16 05:26:56.571502] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:16.566 [2024-12-16 05:26:56.571752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65493 ] 00:09:16.566 [2024-12-16 05:26:56.751687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.825 [2024-12-16 05:26:56.845170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.825 [2024-12-16 05:26:56.999733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.084  [2024-12-16T05:26:58.280Z] Copying: 12/36 [MB] (average 571 MBps) 00:09:18.021 00:09:18.021 00:09:18.021 real 0m1.656s 00:09:18.021 user 0m1.367s 00:09:18.021 sys 0m0.922s 00:09:18.021 ************************************ 00:09:18.021 END TEST dd_sparse_file_to_bdev 00:09:18.021 ************************************ 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:18.021 ************************************ 00:09:18.021 START TEST dd_sparse_bdev_to_file 00:09:18.021 ************************************ 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:18.021 05:26:58 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:18.021 { 00:09:18.021 "subsystems": [ 00:09:18.021 { 00:09:18.021 "subsystem": "bdev", 00:09:18.021 "config": [ 00:09:18.021 { 00:09:18.021 "params": { 00:09:18.021 "block_size": 4096, 00:09:18.021 "filename": "dd_sparse_aio_disk", 00:09:18.021 "name": "dd_aio" 00:09:18.021 }, 00:09:18.021 "method": "bdev_aio_create" 00:09:18.021 }, 00:09:18.021 { 00:09:18.021 "method": "bdev_wait_for_examine" 00:09:18.021 } 00:09:18.021 ] 00:09:18.021 } 00:09:18.021 ] 00:09:18.021 } 00:09:18.281 [2024-12-16 05:26:58.288290] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:18.281 [2024-12-16 05:26:58.288671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65538 ] 00:09:18.281 [2024-12-16 05:26:58.471383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.539 [2024-12-16 05:26:58.564312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.539 [2024-12-16 05:26:58.723776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.799  [2024-12-16T05:26:59.994Z] Copying: 12/36 [MB] (average 1200 MBps) 00:09:19.735 00:09:19.735 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:19.735 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:19.735 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:19.735 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:19.735 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:19.735 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:19.735 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:19.736 00:09:19.736 real 0m1.639s 00:09:19.736 user 0m1.321s 
00:09:19.736 sys 0m0.919s 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.736 ************************************ 00:09:19.736 END TEST dd_sparse_bdev_to_file 00:09:19.736 ************************************ 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:19.736 ************************************ 00:09:19.736 END TEST spdk_dd_sparse 00:09:19.736 ************************************ 00:09:19.736 00:09:19.736 real 0m5.529s 00:09:19.736 user 0m4.394s 00:09:19.736 sys 0m2.992s 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.736 05:26:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:19.736 05:26:59 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:19.736 05:26:59 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.736 05:26:59 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.736 05:26:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:19.736 ************************************ 00:09:19.736 START TEST spdk_dd_negative 00:09:19.736 ************************************ 00:09:19.736 05:26:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:20.003 * Looking for test storage... 
00:09:20.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.003 --rc genhtml_branch_coverage=1 00:09:20.003 --rc genhtml_function_coverage=1 00:09:20.003 --rc genhtml_legend=1 00:09:20.003 --rc geninfo_all_blocks=1 00:09:20.003 --rc geninfo_unexecuted_blocks=1 00:09:20.003 00:09:20.003 ' 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.003 --rc genhtml_branch_coverage=1 00:09:20.003 --rc genhtml_function_coverage=1 00:09:20.003 --rc genhtml_legend=1 00:09:20.003 --rc geninfo_all_blocks=1 00:09:20.003 --rc geninfo_unexecuted_blocks=1 00:09:20.003 00:09:20.003 ' 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.003 --rc genhtml_branch_coverage=1 00:09:20.003 --rc genhtml_function_coverage=1 00:09:20.003 --rc genhtml_legend=1 00:09:20.003 --rc geninfo_all_blocks=1 00:09:20.003 --rc geninfo_unexecuted_blocks=1 00:09:20.003 00:09:20.003 ' 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.003 --rc genhtml_branch_coverage=1 00:09:20.003 --rc genhtml_function_coverage=1 00:09:20.003 --rc genhtml_legend=1 00:09:20.003 --rc geninfo_all_blocks=1 00:09:20.003 --rc geninfo_unexecuted_blocks=1 00:09:20.003 00:09:20.003 ' 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.003 ************************************ 00:09:20.003 START TEST 
dd_invalid_arguments 00:09:20.003 ************************************ 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.003 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:20.003 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:20.003 00:09:20.003 CPU options: 00:09:20.003 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:20.003 (like [0,1,10]) 00:09:20.003 --lcores lcore to CPU mapping list. The list is in the format: 00:09:20.003 [<,lcores[@CPUs]>...] 00:09:20.003 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:20.003 Within the group, '-' is used for range separator, 00:09:20.003 ',' is used for single number separator. 00:09:20.003 '( )' can be omitted for single element group, 00:09:20.003 '@' can be omitted if cpus and lcores have the same value 00:09:20.003 --disable-cpumask-locks Disable CPU core lock files. 00:09:20.003 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:20.003 pollers in the app support interrupt mode) 00:09:20.003 -p, --main-core main (primary) core for DPDK 00:09:20.003 00:09:20.003 Configuration options: 00:09:20.003 -c, --config, --json JSON config file 00:09:20.003 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:20.003 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:20.003 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:20.003 --rpcs-allowed comma-separated list of permitted RPCS 00:09:20.003 --json-ignore-init-errors don't exit on invalid config entry 00:09:20.003 00:09:20.003 Memory options: 00:09:20.003 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:20.003 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:20.003 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:20.003 -R, --huge-unlink unlink huge files after initialization 00:09:20.003 -n, --mem-channels number of memory channels used for DPDK 00:09:20.003 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:20.003 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:20.003 --no-huge run without using hugepages 00:09:20.003 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:20.003 -i, --shm-id shared memory ID (optional) 00:09:20.003 -g, --single-file-segments force creating just one hugetlbfs file 00:09:20.003 00:09:20.003 PCI options: 00:09:20.004 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:20.004 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:20.004 -u, --no-pci disable PCI access 00:09:20.004 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:20.004 00:09:20.004 Log options: 00:09:20.004 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:20.004 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:20.004 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:20.004 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:20.004 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:09:20.004 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:09:20.004 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:09:20.004 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:09:20.004 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:09:20.004 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:09:20.004 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:09:20.004 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:09:20.004 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:20.004 --silence-noticelog disable notice level logging to stderr 00:09:20.004 00:09:20.004 Trace options: 00:09:20.004 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:20.004 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:20.004 [2024-12-16 05:27:00.249107] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:09:20.296 setting 0 to disable trace (default 32768) 00:09:20.296 Tracepoints vary in size and can use more than one trace entry. 00:09:20.296 -e, --tpoint-group [:] 00:09:20.297 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:09:20.297 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:09:20.297 blob, bdev_raid, scheduler, all). 00:09:20.297 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:20.297 a tracepoint group. First tpoint inside a group can be enabled by 00:09:20.297 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:20.297 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:09:20.297 in /include/spdk_internal/trace_defs.h 00:09:20.297 00:09:20.297 Other options: 00:09:20.297 -h, --help show this usage 00:09:20.297 -v, --version print SPDK version 00:09:20.297 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:20.297 --env-context Opaque context for use of the env implementation 00:09:20.297 00:09:20.297 Application specific: 00:09:20.297 [--------- DD Options ---------] 00:09:20.297 --if Input file. Must specify either --if or --ib. 00:09:20.297 --ib Input bdev. Must specifier either --if or --ib 00:09:20.297 --of Output file. Must specify either --of or --ob. 00:09:20.297 --ob Output bdev. Must specify either --of or --ob. 00:09:20.297 --iflag Input file flags. 00:09:20.297 --oflag Output file flags. 00:09:20.297 --bs I/O unit size (default: 4096) 00:09:20.297 --qd Queue depth (default: 2) 00:09:20.297 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:20.297 --skip Skip this many I/O units at start of input. (default: 0) 00:09:20.297 --seek Skip this many I/O units at start of output. (default: 0) 00:09:20.297 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:20.297 --sparse Enable hole skipping in input target 00:09:20.297 Available iflag and oflag values: 00:09:20.297 append - append mode 00:09:20.297 direct - use direct I/O for data 00:09:20.297 directory - fail unless a directory 00:09:20.297 dsync - use synchronized I/O for data 00:09:20.297 noatime - do not update access time 00:09:20.297 noctty - do not assign controlling terminal from file 00:09:20.297 nofollow - do not follow symlinks 00:09:20.297 nonblock - use non-blocking I/O 00:09:20.297 sync - use synchronized I/O for data and metadata 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:20.297 ************************************ 00:09:20.297 END TEST dd_invalid_arguments 00:09:20.297 ************************************ 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:20.297 00:09:20.297 real 0m0.168s 00:09:20.297 user 0m0.080s 00:09:20.297 sys 0m0.085s 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.297 ************************************ 00:09:20.297 START TEST dd_double_input 00:09:20.297 ************************************ 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:20.297 [2024-12-16 05:27:00.449451] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:20.297 00:09:20.297 real 0m0.147s 00:09:20.297 user 0m0.076s 00:09:20.297 sys 0m0.070s 00:09:20.297 ************************************ 00:09:20.297 END TEST dd_double_input 00:09:20.297 ************************************ 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.297 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.557 ************************************ 00:09:20.557 START TEST dd_double_output 00:09:20.557 ************************************ 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:20.557 [2024-12-16 05:27:00.669947] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:20.557 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:20.557 ************************************ 00:09:20.557 END TEST dd_double_output 00:09:20.557 ************************************ 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:20.558 00:09:20.558 real 0m0.175s 00:09:20.558 user 0m0.092s 00:09:20.558 sys 0m0.081s 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.558 ************************************ 00:09:20.558 START TEST dd_no_input 00:09:20.558 ************************************ 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.558 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:20.817 [2024-12-16 05:27:00.891055] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:09:20.817 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:09:20.817 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:20.817 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:20.817 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:20.817 00:09:20.817 real 0m0.165s 00:09:20.817 user 0m0.091s 00:09:20.817 sys 0m0.072s 00:09:20.817 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.817 ************************************ 00:09:20.817 END TEST dd_no_input 00:09:20.817 ************************************ 00:09:20.817 05:27:00 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:20.817 05:27:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:20.818 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.818 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.818 05:27:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:20.818 ************************************ 00:09:20.818 START TEST dd_no_output 00:09:20.818 ************************************ 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:20.818 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:21.077 [2024-12-16 05:27:01.106627] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:09:21.077 05:27:01 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:21.077 00:09:21.077 real 0m0.166s 00:09:21.077 user 0m0.095s 00:09:21.077 sys 0m0.069s 00:09:21.077 ************************************ 00:09:21.077 END TEST dd_no_output 00:09:21.077 ************************************ 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:21.077 ************************************ 00:09:21.077 START TEST dd_wrong_blocksize 00:09:21.077 ************************************ 00:09:21.077 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:21.078 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:21.078 [2024-12-16 05:27:01.330063] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:09:21.337 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:21.337 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:21.337 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:21.337 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:21.337 00:09:21.337 real 0m0.169s 00:09:21.337 user 0m0.096s 00:09:21.337 sys 0m0.071s 00:09:21.338 ************************************ 00:09:21.338 END TEST dd_wrong_blocksize 00:09:21.338 ************************************ 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:21.338 ************************************ 00:09:21.338 START TEST dd_smaller_blocksize 00:09:21.338 ************************************ 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.338 
05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:21.338 05:27:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:21.338 [2024-12-16 05:27:01.553673] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:21.338 [2024-12-16 05:27:01.553866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65788 ] 00:09:21.598 [2024-12-16 05:27:01.742977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.857 [2024-12-16 05:27:01.874753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.857 [2024-12-16 05:27:02.065605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.427 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:22.686 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:22.686 [2024-12-16 05:27:02.759097] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:22.686 [2024-12-16 05:27:02.759432] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.255 [2024-12-16 05:27:03.440448] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.515 00:09:23.515 real 0m2.230s 00:09:23.515 user 0m1.463s 00:09:23.515 sys 0m0.648s 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.515 ************************************ 00:09:23.515 END TEST dd_smaller_blocksize 00:09:23.515 ************************************ 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:23.515 ************************************ 00:09:23.515 START TEST dd_invalid_count 00:09:23.515 ************************************ 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:23.515 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:23.774 [2024-12-16 05:27:03.838791] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.774 00:09:23.774 real 0m0.180s 00:09:23.774 user 0m0.098s 00:09:23.774 sys 0m0.079s 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:23.774 ************************************ 00:09:23.774 END TEST dd_invalid_count 00:09:23.774 ************************************ 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.774 05:27:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:23.774 ************************************ 
00:09:23.774 START TEST dd_invalid_oflag 00:09:23.774 ************************************ 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:23.775 05:27:03 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:24.034 [2024-12-16 05:27:04.074757] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:24.034 ************************************ 00:09:24.034 END TEST dd_invalid_oflag 00:09:24.034 ************************************ 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:24.034 00:09:24.034 real 0m0.177s 00:09:24.034 user 0m0.104s 00:09:24.034 sys 0m0.070s 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:24.034 ************************************ 00:09:24.034 START TEST dd_invalid_iflag 00:09:24.034 
************************************ 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:24.034 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:24.294 [2024-12-16 05:27:04.310847] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:24.294 00:09:24.294 real 0m0.167s 00:09:24.294 user 0m0.081s 00:09:24.294 sys 0m0.084s 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.294 ************************************ 00:09:24.294 END TEST dd_invalid_iflag 00:09:24.294 ************************************ 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:24.294 ************************************ 00:09:24.294 START TEST dd_unknown_flag 00:09:24.294 ************************************ 00:09:24.294 
05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:24.294 05:27:04 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:24.294 [2024-12-16 05:27:04.531309] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:24.294 [2024-12-16 05:27:04.531783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65906 ] 00:09:24.554 [2024-12-16 05:27:04.710828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.554 [2024-12-16 05:27:04.806875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.814 [2024-12-16 05:27:04.976102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.814 [2024-12-16 05:27:05.067496] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:24.814 [2024-12-16 05:27:05.067577] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:24.814 [2024-12-16 05:27:05.067705] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:24.814 [2024-12-16 05:27:05.067747] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:24.814 [2024-12-16 05:27:05.068027] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:24.814 [2024-12-16 05:27:05.068074] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:24.814 [2024-12-16 05:27:05.068161] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:24.814 [2024-12-16 05:27:05.068189] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:25.751 [2024-12-16 05:27:05.639539] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.751 00:09:25.751 real 0m1.456s 00:09:25.751 user 0m1.151s 00:09:25.751 sys 0m0.209s 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.751 ************************************ 00:09:25.751 END TEST dd_unknown_flag 00:09:25.751 ************************************ 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:25.751 ************************************ 00:09:25.751 START TEST dd_invalid_json 00:09:25.751 ************************************ 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:25.751 05:27:05 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:26.011 [2024-12-16 05:27:06.029258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:26.011 [2024-12-16 05:27:06.029394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65945 ] 00:09:26.011 [2024-12-16 05:27:06.193711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.270 [2024-12-16 05:27:06.278665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.270 [2024-12-16 05:27:06.278769] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:26.270 [2024-12-16 05:27:06.278790] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:26.270 [2024-12-16 05:27:06.278804] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:26.270 [2024-12-16 05:27:06.278864] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:26.270 ************************************ 00:09:26.270 END TEST dd_invalid_json 00:09:26.270 ************************************ 00:09:26.270 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:09:26.270 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.270 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:09:26.270 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:09:26.270 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:09:26.270 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.270 00:09:26.270 real 0m0.565s 00:09:26.270 user 0m0.356s 00:09:26.270 sys 0m0.105s 00:09:26.270 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.271 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:26.530 ************************************ 00:09:26.530 START TEST dd_invalid_seek 00:09:26.530 ************************************ 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:26.530 
05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:09:26.530 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:26.531 05:27:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:26.531 { 00:09:26.531 "subsystems": [ 00:09:26.531 { 00:09:26.531 "subsystem": "bdev", 00:09:26.531 "config": [ 00:09:26.531 { 00:09:26.531 "params": { 00:09:26.531 "block_size": 512, 00:09:26.531 "num_blocks": 512, 00:09:26.531 "name": "malloc0" 00:09:26.531 }, 00:09:26.531 "method": "bdev_malloc_create" 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "params": { 00:09:26.531 "block_size": 512, 00:09:26.531 "num_blocks": 512, 00:09:26.531 "name": "malloc1" 00:09:26.531 }, 00:09:26.531 "method": "bdev_malloc_create" 00:09:26.531 }, 00:09:26.531 { 00:09:26.531 "method": "bdev_wait_for_examine" 00:09:26.531 } 00:09:26.531 ] 00:09:26.531 } 00:09:26.531 ] 00:09:26.531 } 00:09:26.531 [2024-12-16 05:27:06.660579] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:26.531 [2024-12-16 05:27:06.660772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65977 ] 00:09:26.791 [2024-12-16 05:27:06.840600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.791 [2024-12-16 05:27:06.951199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.050 [2024-12-16 05:27:07.133204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.050 [2024-12-16 05:27:07.254805] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:27.050 [2024-12-16 05:27:07.254896] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:27.618 [2024-12-16 05:27:07.857103] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:27.878 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:09:27.878 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:27.878 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:09:27.878 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:09:27.878 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:09:27.878 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:27.878 00:09:27.878 real 0m1.550s 00:09:27.878 user 0m1.280s 00:09:27.878 sys 0m0.225s 00:09:27.878 ************************************ 00:09:27.878 END TEST dd_invalid_seek 00:09:27.878 ************************************ 00:09:27.878 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.878 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:28.139 ************************************ 00:09:28.139 START TEST dd_invalid_skip 00:09:28.139 ************************************ 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:28.139 05:27:08 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:28.139 { 00:09:28.139 "subsystems": [ 00:09:28.139 { 00:09:28.139 "subsystem": "bdev", 00:09:28.139 "config": [ 00:09:28.139 { 00:09:28.139 "params": { 00:09:28.139 "block_size": 512, 00:09:28.139 "num_blocks": 512, 00:09:28.139 "name": "malloc0" 00:09:28.139 }, 00:09:28.139 "method": "bdev_malloc_create" 00:09:28.139 }, 00:09:28.139 { 00:09:28.139 "params": { 00:09:28.139 "block_size": 512, 00:09:28.139 "num_blocks": 512, 00:09:28.139 "name": "malloc1" 00:09:28.139 }, 00:09:28.139 "method": "bdev_malloc_create" 00:09:28.139 }, 00:09:28.139 { 00:09:28.139 "method": "bdev_wait_for_examine" 00:09:28.139 } 00:09:28.139 ] 00:09:28.139 } 00:09:28.139 ] 00:09:28.139 } 00:09:28.140 [2024-12-16 05:27:08.252311] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:28.140 [2024-12-16 05:27:08.252735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66028 ] 00:09:28.400 [2024-12-16 05:27:08.413228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.400 [2024-12-16 05:27:08.496632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.400 [2024-12-16 05:27:08.641792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:28.660 [2024-12-16 05:27:08.754555] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:28.660 [2024-12-16 05:27:08.754897] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:29.229 [2024-12-16 05:27:09.415787] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.489 00:09:29.489 real 0m1.512s 00:09:29.489 user 0m1.254s 00:09:29.489 sys 0m0.210s 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:29.489 ************************************ 00:09:29.489 END TEST dd_invalid_skip 00:09:29.489 ************************************ 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:29.489 ************************************ 00:09:29.489 START TEST dd_invalid_input_count 00:09:29.489 ************************************ 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:29.489 05:27:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:29.749 { 00:09:29.749 "subsystems": [ 00:09:29.749 { 00:09:29.749 "subsystem": "bdev", 00:09:29.749 "config": [ 00:09:29.749 { 00:09:29.749 "params": { 00:09:29.749 "block_size": 512, 00:09:29.749 "num_blocks": 512, 00:09:29.749 "name": "malloc0" 00:09:29.749 }, 00:09:29.749 "method": "bdev_malloc_create" 00:09:29.749 }, 00:09:29.749 { 00:09:29.749 "params": { 00:09:29.749 "block_size": 512, 00:09:29.749 "num_blocks": 512, 00:09:29.749 "name": "malloc1" 00:09:29.749 }, 00:09:29.749 "method": "bdev_malloc_create" 00:09:29.749 }, 00:09:29.749 { 00:09:29.749 "method": "bdev_wait_for_examine" 00:09:29.749 } 00:09:29.749 ] 00:09:29.749 } 00:09:29.749 ] 00:09:29.749 } 00:09:29.749 [2024-12-16 05:27:09.845244] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:29.749 [2024-12-16 05:27:09.845419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66068 ] 00:09:30.020 [2024-12-16 05:27:10.025092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.020 [2024-12-16 05:27:10.116184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.291 [2024-12-16 05:27:10.278110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.291 [2024-12-16 05:27:10.399806] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:30.291 [2024-12-16 05:27:10.400094] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:30.859 [2024-12-16 05:27:11.008126] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.119 00:09:31.119 real 0m1.520s 00:09:31.119 user 0m1.256s 00:09:31.119 sys 0m0.214s 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.119 ************************************ 00:09:31.119 END TEST dd_invalid_input_count 00:09:31.119 ************************************ 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:31.119 ************************************ 00:09:31.119 START TEST dd_invalid_output_count 00:09:31.119 ************************************ 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:31.119 05:27:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:31.119 { 00:09:31.119 "subsystems": [ 00:09:31.119 { 00:09:31.119 "subsystem": "bdev", 00:09:31.119 "config": [ 00:09:31.119 { 00:09:31.119 "params": { 00:09:31.119 "block_size": 512, 00:09:31.119 "num_blocks": 512, 00:09:31.119 "name": "malloc0" 00:09:31.119 }, 00:09:31.119 "method": "bdev_malloc_create" 00:09:31.119 }, 00:09:31.119 { 00:09:31.119 "method": "bdev_wait_for_examine" 00:09:31.119 } 00:09:31.119 ] 00:09:31.119 } 00:09:31.119 ] 00:09:31.119 } 00:09:31.379 [2024-12-16 05:27:11.416118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:31.379 [2024-12-16 05:27:11.416295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66114 ] 00:09:31.379 [2024-12-16 05:27:11.594946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.638 [2024-12-16 05:27:11.679061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.638 [2024-12-16 05:27:11.825118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.897 [2024-12-16 05:27:11.936560] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:31.898 [2024-12-16 05:27:11.936917] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.467 [2024-12-16 05:27:12.563511] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:32.727 05:27:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:09:32.727 05:27:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:32.727 05:27:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:09:32.727 05:27:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:32.728 00:09:32.728 real 0m1.498s 00:09:32.728 user 0m1.223s 00:09:32.728 sys 0m0.225s 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:32.728 ************************************ 00:09:32.728 END TEST dd_invalid_output_count 00:09:32.728 ************************************ 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:32.728 ************************************ 00:09:32.728 START TEST dd_bs_not_multiple 00:09:32.728 ************************************ 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:32.728 05:27:12 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:32.728 05:27:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:32.728 { 00:09:32.728 "subsystems": [ 00:09:32.728 { 00:09:32.728 "subsystem": "bdev", 00:09:32.728 "config": [ 00:09:32.728 { 00:09:32.728 "params": { 00:09:32.728 "block_size": 512, 00:09:32.728 "num_blocks": 512, 00:09:32.728 "name": "malloc0" 00:09:32.728 }, 00:09:32.728 "method": "bdev_malloc_create" 00:09:32.728 }, 00:09:32.728 { 00:09:32.728 "params": { 00:09:32.728 "block_size": 512, 00:09:32.728 "num_blocks": 512, 00:09:32.728 "name": "malloc1" 00:09:32.728 }, 00:09:32.728 "method": "bdev_malloc_create" 00:09:32.728 }, 00:09:32.728 { 00:09:32.728 "method": "bdev_wait_for_examine" 00:09:32.728 } 00:09:32.728 ] 00:09:32.728 } 00:09:32.728 ] 00:09:32.728 } 00:09:32.728 [2024-12-16 05:27:12.948187] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
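Editor's note: the JSON block gen_conf prints above is the bdev configuration spdk_dd reads through --json /dev/fd/62. A standalone equivalent of this failing copy, assuming the same two 512-block malloc bdevs as in the trace, could be run as:

# Same bdev layout and flags as the dd_bs_not_multiple invocation above;
# process substitution supplies the JSON on a /dev/fd path, as gen_conf does.
build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 512, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "num_blocks": 512, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)
# Expected to fail: --bs (513) is not a multiple of the 512-byte native block size.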
00:09:32.728 [2024-12-16 05:27:12.948524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66157 ] 00:09:32.988 [2024-12-16 05:27:13.110536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.988 [2024-12-16 05:27:13.199513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.247 [2024-12-16 05:27:13.351189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.247 [2024-12-16 05:27:13.465302] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:33.247 [2024-12-16 05:27:13.465402] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.184 [2024-12-16 05:27:14.089511] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.184 ************************************ 00:09:34.184 END TEST dd_bs_not_multiple 00:09:34.184 ************************************ 00:09:34.184 00:09:34.184 real 0m1.468s 00:09:34.184 user 0m1.229s 00:09:34.184 sys 0m0.197s 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:34.184 ************************************ 00:09:34.184 END TEST spdk_dd_negative 00:09:34.184 ************************************ 00:09:34.184 00:09:34.184 real 0m14.439s 00:09:34.184 user 0m10.435s 00:09:34.184 sys 0m3.332s 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.184 05:27:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:34.184 ************************************ 00:09:34.184 END TEST spdk_dd 00:09:34.184 ************************************ 00:09:34.184 00:09:34.184 real 2m48.105s 00:09:34.184 user 2m14.905s 00:09:34.184 sys 1m2.745s 00:09:34.184 05:27:14 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.184 05:27:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:34.442 05:27:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:34.442 05:27:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:34.442 05:27:14 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:34.442 05:27:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.442 05:27:14 -- common/autotest_common.sh@10 -- # set +x 00:09:34.442 05:27:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:34.442 05:27:14 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:34.442 05:27:14 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:34.442 05:27:14 -- spdk/autotest.sh@277 
-- # export NET_TYPE 00:09:34.442 05:27:14 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:34.442 05:27:14 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:34.442 05:27:14 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:34.442 05:27:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.442 05:27:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.442 05:27:14 -- common/autotest_common.sh@10 -- # set +x 00:09:34.442 ************************************ 00:09:34.442 START TEST nvmf_tcp 00:09:34.442 ************************************ 00:09:34.442 05:27:14 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:34.442 * Looking for test storage... 00:09:34.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:34.442 05:27:14 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.442 05:27:14 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.442 05:27:14 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.442 05:27:14 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.442 05:27:14 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.443 05:27:14 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:34.443 05:27:14 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.443 05:27:14 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.443 --rc genhtml_branch_coverage=1 00:09:34.443 --rc genhtml_function_coverage=1 00:09:34.443 --rc genhtml_legend=1 00:09:34.443 --rc geninfo_all_blocks=1 00:09:34.443 --rc geninfo_unexecuted_blocks=1 00:09:34.443 00:09:34.443 ' 00:09:34.443 05:27:14 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.443 --rc genhtml_branch_coverage=1 00:09:34.443 --rc genhtml_function_coverage=1 00:09:34.443 --rc genhtml_legend=1 00:09:34.443 --rc geninfo_all_blocks=1 00:09:34.443 --rc geninfo_unexecuted_blocks=1 00:09:34.443 00:09:34.443 ' 00:09:34.443 05:27:14 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.443 --rc genhtml_branch_coverage=1 00:09:34.443 --rc genhtml_function_coverage=1 00:09:34.443 --rc genhtml_legend=1 00:09:34.443 --rc geninfo_all_blocks=1 00:09:34.443 --rc geninfo_unexecuted_blocks=1 00:09:34.443 00:09:34.443 ' 00:09:34.443 05:27:14 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.443 --rc genhtml_branch_coverage=1 00:09:34.443 --rc genhtml_function_coverage=1 00:09:34.443 --rc genhtml_legend=1 00:09:34.443 --rc geninfo_all_blocks=1 00:09:34.443 --rc geninfo_unexecuted_blocks=1 00:09:34.443 00:09:34.443 ' 00:09:34.443 05:27:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:34.443 05:27:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:34.443 05:27:14 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:34.443 05:27:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.443 05:27:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.443 05:27:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.702 ************************************ 00:09:34.702 START TEST nvmf_target_core 00:09:34.702 ************************************ 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:34.702 * Looking for test storage... 00:09:34.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.702 --rc genhtml_branch_coverage=1 00:09:34.702 --rc genhtml_function_coverage=1 00:09:34.702 --rc genhtml_legend=1 00:09:34.702 --rc geninfo_all_blocks=1 00:09:34.702 --rc geninfo_unexecuted_blocks=1 00:09:34.702 00:09:34.702 ' 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.702 --rc genhtml_branch_coverage=1 00:09:34.702 --rc genhtml_function_coverage=1 00:09:34.702 --rc genhtml_legend=1 00:09:34.702 --rc geninfo_all_blocks=1 00:09:34.702 --rc geninfo_unexecuted_blocks=1 00:09:34.702 00:09:34.702 ' 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.702 --rc genhtml_branch_coverage=1 00:09:34.702 --rc genhtml_function_coverage=1 00:09:34.702 --rc genhtml_legend=1 00:09:34.702 --rc geninfo_all_blocks=1 00:09:34.702 --rc geninfo_unexecuted_blocks=1 00:09:34.702 00:09:34.702 ' 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.702 --rc genhtml_branch_coverage=1 00:09:34.702 --rc genhtml_function_coverage=1 00:09:34.702 --rc genhtml_legend=1 00:09:34.702 --rc geninfo_all_blocks=1 00:09:34.702 --rc geninfo_unexecuted_blocks=1 00:09:34.702 00:09:34.702 ' 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:34.702 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.703 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.703 ************************************ 00:09:34.703 START TEST nvmf_host_management 00:09:34.703 ************************************ 00:09:34.703 05:27:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:34.964 * Looking for test storage... 
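Editor's note: the "line 33: [: : integer expression expected" message above (and again when common.sh is re-sourced later) is bash complaining that an empty value was handed to an integer test, '[' '' -eq 1 ']'. It is harmless in this run, but a minimal reproduction and the usual guard look like this (the variable name is illustrative, not the one in common.sh):

hugepages=""                                 # empty, as in this environment
[ "$hugepages" -eq 1 ] && echo enabled       # -> [: : integer expression expected
[ "${hugepages:-0}" -eq 1 ] && echo enabled  # default empty to 0: no warning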
00:09:34.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.964 --rc genhtml_branch_coverage=1 00:09:34.964 --rc genhtml_function_coverage=1 00:09:34.964 --rc genhtml_legend=1 00:09:34.964 --rc geninfo_all_blocks=1 00:09:34.964 --rc geninfo_unexecuted_blocks=1 00:09:34.964 00:09:34.964 ' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.964 --rc genhtml_branch_coverage=1 00:09:34.964 --rc genhtml_function_coverage=1 00:09:34.964 --rc genhtml_legend=1 00:09:34.964 --rc geninfo_all_blocks=1 00:09:34.964 --rc geninfo_unexecuted_blocks=1 00:09:34.964 00:09:34.964 ' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.964 --rc genhtml_branch_coverage=1 00:09:34.964 --rc genhtml_function_coverage=1 00:09:34.964 --rc genhtml_legend=1 00:09:34.964 --rc geninfo_all_blocks=1 00:09:34.964 --rc geninfo_unexecuted_blocks=1 00:09:34.964 00:09:34.964 ' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.964 --rc genhtml_branch_coverage=1 00:09:34.964 --rc genhtml_function_coverage=1 00:09:34.964 --rc genhtml_legend=1 00:09:34.964 --rc geninfo_all_blocks=1 00:09:34.964 --rc geninfo_unexecuted_blocks=1 00:09:34.964 00:09:34.964 ' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.964 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.964 05:27:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:34.964 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:34.965 Cannot find device "nvmf_init_br" 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:34.965 Cannot find device "nvmf_init_br2" 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:34.965 Cannot find device "nvmf_tgt_br" 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.965 Cannot find device "nvmf_tgt_br2" 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:34.965 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:35.222 Cannot find device "nvmf_init_br" 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:35.222 Cannot find device "nvmf_init_br2" 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:35.222 Cannot find device "nvmf_tgt_br" 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:35.222 Cannot find device "nvmf_tgt_br2" 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:35.222 Cannot find device "nvmf_br" 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:35.222 Cannot find device "nvmf_init_if" 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:35.222 Cannot find device "nvmf_init_if2" 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.222 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:35.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:09:35.481 00:09:35.481 --- 10.0.0.3 ping statistics --- 00:09:35.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.481 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:35.481 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:35.481 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.109 ms 00:09:35.481 00:09:35.481 --- 10.0.0.4 ping statistics --- 00:09:35.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.481 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:35.481 00:09:35.481 --- 10.0.0.1 ping statistics --- 00:09:35.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.481 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:35.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:35.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:09:35.481 00:09:35.481 --- 10.0.0.2 ping statistics --- 00:09:35.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.481 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=66506 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 66506 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 66506 ']' 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.481 05:27:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.740 [2024-12-16 05:27:15.810017] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
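Editor's note: the nvmf_veth_init block above builds the four-address test topology that the pings just verified. Condensed from the ip/iptables commands in the trace (same interface names and addresses; only the first initiator/target pair is shown):

# Initiator-side veth pair plus target-side pair in a netns, joined by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3        # initiator side -> target interface, as in the statistics above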
00:09:35.740 [2024-12-16 05:27:15.810213] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.740 [2024-12-16 05:27:15.997348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.998 [2024-12-16 05:27:16.133487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.998 [2024-12-16 05:27:16.133553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.998 [2024-12-16 05:27:16.133577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.998 [2024-12-16 05:27:16.133651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.998 [2024-12-16 05:27:16.133671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.998 [2024-12-16 05:27:16.136033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.998 [2024-12-16 05:27:16.136208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.998 [2024-12-16 05:27:16.136338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:35.998 [2024-12-16 05:27:16.136423] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.257 [2024-12-16 05:27:16.331891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.515 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.515 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:36.515 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:36.515 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.515 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 [2024-12-16 05:27:16.806742] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
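For reference, the target bring-up traced above (nvmfappstart inside the test netns, then nvmf_create_transport) can be reproduced by hand. This is a minimal sketch assuming the in-repo scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket; it is not the nvmf/common.sh helper itself:

    # Start the NVMe-oF target in the test netns with the same shm id (-i 0),
    # tracepoint mask (-e 0xFFFF) and core mask (-m 0x1E) used above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

    # The harness polls for the RPC socket; a sketch can simply wait briefly,
    # then block until subsystem init is done and create the TCP transport
    # with the same flags the test passes (-t tcp -o -u 8192).
    sleep 1
    "$rpc_py" framework_wait_init
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192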
00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 Malloc0 00:09:36.774 [2024-12-16 05:27:16.932485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=66565 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 66565 /var/tmp/bdevperf.sock 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 66565 ']' 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:36.774 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:36.774 { 00:09:36.774 "params": { 00:09:36.774 "name": "Nvme$subsystem", 00:09:36.774 "trtype": "$TEST_TRANSPORT", 00:09:36.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.774 "adrfam": "ipv4", 00:09:36.774 "trsvcid": "$NVMF_PORT", 00:09:36.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.774 "hdgst": ${hdgst:-false}, 00:09:36.774 "ddgst": ${ddgst:-false} 00:09:36.774 }, 00:09:36.774 "method": "bdev_nvme_attach_controller" 00:09:36.774 } 00:09:36.775 EOF 00:09:36.775 )") 00:09:36.775 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:36.775 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:36.775 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:36.775 05:27:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:36.775 "params": { 00:09:36.775 "name": "Nvme0", 00:09:36.775 "trtype": "tcp", 00:09:36.775 "traddr": "10.0.0.3", 00:09:36.775 "adrfam": "ipv4", 00:09:36.775 "trsvcid": "4420", 00:09:36.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:36.775 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:36.775 "hdgst": false, 00:09:36.775 "ddgst": false 00:09:36.775 }, 00:09:36.775 "method": "bdev_nvme_attach_controller" 00:09:36.775 }' 00:09:37.033 [2024-12-16 05:27:17.088000] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:37.033 [2024-12-16 05:27:17.088926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66565 ] 00:09:37.033 [2024-12-16 05:27:17.275942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.292 [2024-12-16 05:27:17.400897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.551 [2024-12-16 05:27:17.602138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.551 Running I/O for 10 seconds... 
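The JSON printed above is what bdevperf consumes through /dev/fd/63, and its single bdev_nvme_attach_controller entry maps one-to-one onto an RPC. A sketch of issuing that same attach by hand against the bdevperf RPC socket from this run; the flag spellings are rpc.py's and are not part of this log:

    # Same parameters as the generated config: bdev name Nvme0, NVMe/TCP to
    # 10.0.0.3:4420, subsystem cnode0, host NQN host0, digests left disabled.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0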
00:09:37.809 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.809 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.810 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:38.070 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.070 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:09:38.070 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:09:38.070 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:38.070 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:38.070 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:38.070 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:38.070 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.070 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:38.070 [2024-12-16 
05:27:18.118001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.118274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.118458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.118624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.118817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.118957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119310] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.070 [2024-12-16 05:27:18.119671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.070 [2024-12-16 05:27:18.119698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.119973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.119989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.071 [2024-12-16 05:27:18.120962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.071 [2024-12-16 05:27:18.120975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.072 [2024-12-16 05:27:18.120990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:38.072 [2024-12-16 05:27:18.121003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.072 [2024-12-16 05:27:18.121018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:09:38.072 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.072 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:38.072 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.072 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:38.072 [2024-12-16 05:27:18.122739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:38.072 task offset: 56576 on job bdev=Nvme0n1 fails 00:09:38.072 00:09:38.072 Latency(us) 00:09:38.072 [2024-12-16T05:27:18.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.072 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:38.072 Job: Nvme0n1 ended in about 0.33 seconds with error 00:09:38.072 Verification LBA range: start 0x0 length 0x400 00:09:38.072 Nvme0n1 : 0.33 1169.32 73.08 194.89 0.00 45035.99 4051.32 43372.92 00:09:38.072 [2024-12-16T05:27:18.331Z] =================================================================================================================== 00:09:38.072 [2024-12-16T05:27:18.331Z] Total : 1169.32 73.08 194.89 0.00 45035.99 4051.32 43372.92 00:09:38.072 [2024-12-16 05:27:18.128103] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.072 [2024-12-16 05:27:18.128180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:09:38.072 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.072 05:27:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:38.072 [2024-12-16 05:27:18.136705] 
bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 66565 00:09:39.009 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (66565) - No such process 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.009 { 00:09:39.009 "params": { 00:09:39.009 "name": "Nvme$subsystem", 00:09:39.009 "trtype": "$TEST_TRANSPORT", 00:09:39.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.009 "adrfam": "ipv4", 00:09:39.009 "trsvcid": "$NVMF_PORT", 00:09:39.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.009 "hdgst": ${hdgst:-false}, 00:09:39.009 "ddgst": ${ddgst:-false} 00:09:39.009 }, 00:09:39.009 "method": "bdev_nvme_attach_controller" 00:09:39.009 } 00:09:39.009 EOF 00:09:39.009 )") 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:39.009 05:27:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.009 "params": { 00:09:39.009 "name": "Nvme0", 00:09:39.009 "trtype": "tcp", 00:09:39.009 "traddr": "10.0.0.3", 00:09:39.009 "adrfam": "ipv4", 00:09:39.009 "trsvcid": "4420", 00:09:39.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:39.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:39.009 "hdgst": false, 00:09:39.009 "ddgst": false 00:09:39.009 }, 00:09:39.009 "method": "bdev_nvme_attach_controller" 00:09:39.009 }' 00:09:39.009 [2024-12-16 05:27:19.247499] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
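The run of ABORTED - SQ DELETION completions above is the intended outcome of this test step: the host is removed from cnode0's allowed-host list while bdevperf still has I/O in flight, its in-flight commands are completed as aborted when the queue pair is torn down, and the host is then re-added so the bdev layer can reset and reconnect the controller (the "Resetting controller successful" notice). The two RPCs involved, as a standalone sketch against the target's default RPC socket rather than the rpc_cmd wrapper used in the trace:

    # Revoke, then restore, host0's access to subsystem cnode0.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    "$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0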
00:09:39.009 [2024-12-16 05:27:19.247940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66604 ] 00:09:39.268 [2024-12-16 05:27:19.429812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.268 [2024-12-16 05:27:19.525795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.528 [2024-12-16 05:27:19.698804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.786 Running I/O for 1 seconds... 00:09:40.745 1344.00 IOPS, 84.00 MiB/s 00:09:40.745 Latency(us) 00:09:40.745 [2024-12-16T05:27:21.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.745 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:40.745 Verification LBA range: start 0x0 length 0x400 00:09:40.745 Nvme0n1 : 1.01 1400.54 87.53 0.00 0.00 44867.53 7626.01 39083.29 00:09:40.745 [2024-12-16T05:27:21.004Z] =================================================================================================================== 00:09:40.745 [2024-12-16T05:27:21.004Z] Total : 1400.54 87.53 0.00 0.00 44867.53 7626.01 39083.29 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.681 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.681 rmmod nvme_tcp 00:09:41.940 rmmod nvme_fabrics 00:09:41.940 rmmod nvme_keyring 00:09:41.940 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.940 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:41.940 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:41.940 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 66506 ']' 00:09:41.940 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 66506 00:09:41.940 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 66506 ']' 00:09:41.940 05:27:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 66506 00:09:41.940 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:41.940 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.940 05:27:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66506 00:09:41.940 killing process with pid 66506 00:09:41.940 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:41.940 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:41.940 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66506' 00:09:41.940 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 66506 00:09:41.940 05:27:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 66506 00:09:42.877 [2024-12-16 05:27:23.024135] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:42.877 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:43.136 05:27:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:43.136 00:09:43.136 real 0m8.420s 00:09:43.136 user 0m31.265s 00:09:43.136 sys 0m1.775s 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.136 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:43.136 ************************************ 00:09:43.136 END TEST nvmf_host_management 00:09:43.136 ************************************ 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.397 ************************************ 00:09:43.397 START TEST nvmf_lvol 00:09:43.397 ************************************ 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:43.397 * Looking for test storage... 
00:09:43.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.397 --rc genhtml_branch_coverage=1 00:09:43.397 --rc genhtml_function_coverage=1 00:09:43.397 --rc genhtml_legend=1 00:09:43.397 --rc geninfo_all_blocks=1 00:09:43.397 --rc geninfo_unexecuted_blocks=1 00:09:43.397 00:09:43.397 ' 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.397 --rc genhtml_branch_coverage=1 00:09:43.397 --rc genhtml_function_coverage=1 00:09:43.397 --rc genhtml_legend=1 00:09:43.397 --rc geninfo_all_blocks=1 00:09:43.397 --rc geninfo_unexecuted_blocks=1 00:09:43.397 00:09:43.397 ' 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.397 --rc genhtml_branch_coverage=1 00:09:43.397 --rc genhtml_function_coverage=1 00:09:43.397 --rc genhtml_legend=1 00:09:43.397 --rc geninfo_all_blocks=1 00:09:43.397 --rc geninfo_unexecuted_blocks=1 00:09:43.397 00:09:43.397 ' 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.397 --rc genhtml_branch_coverage=1 00:09:43.397 --rc genhtml_function_coverage=1 00:09:43.397 --rc genhtml_legend=1 00:09:43.397 --rc geninfo_all_blocks=1 00:09:43.397 --rc geninfo_unexecuted_blocks=1 00:09:43.397 00:09:43.397 ' 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.397 05:27:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.397 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.398 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:43.398 
05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:43.398 Cannot find device "nvmf_init_br" 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:43.398 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:43.657 Cannot find device "nvmf_init_br2" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:43.657 Cannot find device "nvmf_tgt_br" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.657 Cannot find device "nvmf_tgt_br2" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:43.657 Cannot find device "nvmf_init_br" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:43.657 Cannot find device "nvmf_init_br2" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:43.657 Cannot find device "nvmf_tgt_br" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:43.657 Cannot find device "nvmf_tgt_br2" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:43.657 Cannot find device "nvmf_br" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:43.657 Cannot find device "nvmf_init_if" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:43.657 Cannot find device "nvmf_init_if2" 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:43.657 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:43.658 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:43.917 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:43.917 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:09:43.917 00:09:43.917 --- 10.0.0.3 ping statistics --- 00:09:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.917 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:43.917 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:43.917 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:09:43.917 00:09:43.917 --- 10.0.0.4 ping statistics --- 00:09:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.917 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:43.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:43.917 00:09:43.917 --- 10.0.0.1 ping statistics --- 00:09:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.917 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:43.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:43.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:09:43.917 00:09:43.917 --- 10.0.0.2 ping statistics --- 00:09:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.917 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:43.917 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:43.918 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.918 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:43.918 05:27:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=66894 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 66894 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 66894 ']' 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.918 05:27:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:43.918 [2024-12-16 05:27:24.132290] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:43.918 [2024-12-16 05:27:24.132464] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.177 [2024-12-16 05:27:24.315649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.177 [2024-12-16 05:27:24.412104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.177 [2024-12-16 05:27:24.412205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.177 [2024-12-16 05:27:24.412226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.177 [2024-12-16 05:27:24.412239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.177 [2024-12-16 05:27:24.412255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.177 [2024-12-16 05:27:24.414049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.177 [2024-12-16 05:27:24.414172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.177 [2024-12-16 05:27:24.414233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.436 [2024-12-16 05:27:24.594706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.008 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.008 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:45.008 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.008 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.008 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:45.008 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.008 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:45.275 [2024-12-16 05:27:25.427379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.275 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.842 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:45.842 05:27:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.101 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:46.101 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:46.360 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:46.619 05:27:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=37f59e5d-76c8-4278-a149-d3f62b7a7f6c 00:09:46.619 05:27:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 37f59e5d-76c8-4278-a149-d3f62b7a7f6c lvol 20 00:09:46.878 05:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7d07e342-0d0e-45ca-a4ba-1af5648191c7 00:09:46.878 05:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:47.137 05:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d07e342-0d0e-45ca-a4ba-1af5648191c7 00:09:47.395 05:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:47.655 [2024-12-16 05:27:27.776798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:47.655 05:27:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:47.914 05:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66975 00:09:47.914 05:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:47.914 05:27:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:48.849 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7d07e342-0d0e-45ca-a4ba-1af5648191c7 MY_SNAPSHOT 00:09:49.416 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=309ccc15-c009-47b5-bd13-830cbe26c1a8 00:09:49.416 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7d07e342-0d0e-45ca-a4ba-1af5648191c7 30 00:09:49.701 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 309ccc15-c009-47b5-bd13-830cbe26c1a8 MY_CLONE 00:09:49.984 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=89ae7c1b-97df-49de-a720-82d44cd3e060 00:09:49.984 05:27:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 89ae7c1b-97df-49de-a720-82d44cd3e060 00:09:50.551 05:27:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66975 00:09:58.666 Initializing NVMe Controllers 00:09:58.666 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:58.666 Controller IO queue size 128, less than required. 00:09:58.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:58.666 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:58.666 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:58.666 Initialization complete. Launching workers. 
00:09:58.666 ======================================================== 00:09:58.666 Latency(us) 00:09:58.666 Device Information : IOPS MiB/s Average min max 00:09:58.666 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9163.00 35.79 13982.45 279.95 168892.21 00:09:58.666 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9045.50 35.33 14163.78 4438.46 129447.94 00:09:58.666 ======================================================== 00:09:58.666 Total : 18208.50 71.13 14072.53 279.95 168892.21 00:09:58.666 00:09:58.666 05:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.666 05:27:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7d07e342-0d0e-45ca-a4ba-1af5648191c7 00:09:58.924 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37f59e5d-76c8-4278-a149-d3f62b7a7f6c 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.183 rmmod nvme_tcp 00:09:59.183 rmmod nvme_fabrics 00:09:59.183 rmmod nvme_keyring 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 66894 ']' 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 66894 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 66894 ']' 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 66894 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66894 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.183 killing process with pid 66894 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 66894' 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 66894 00:09:59.183 05:27:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 66894 00:10:00.561 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:00.561 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.561 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.561 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:00.561 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:00.562 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:00.820 00:10:00.820 real 0m17.486s 00:10:00.820 user 1m9.782s 00:10:00.820 sys 0m4.241s 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:00.820 ************************************ 00:10:00.820 END TEST nvmf_lvol 00:10:00.820 ************************************ 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.820 ************************************ 00:10:00.820 START TEST nvmf_lvs_grow 00:10:00.820 ************************************ 00:10:00.820 05:27:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:00.820 * Looking for test storage... 00:10:00.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:00.820 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:00.820 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:10:00.820 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:01.080 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:01.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.081 --rc genhtml_branch_coverage=1 00:10:01.081 --rc genhtml_function_coverage=1 00:10:01.081 --rc genhtml_legend=1 00:10:01.081 --rc geninfo_all_blocks=1 00:10:01.081 --rc geninfo_unexecuted_blocks=1 00:10:01.081 00:10:01.081 ' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:01.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.081 --rc genhtml_branch_coverage=1 00:10:01.081 --rc genhtml_function_coverage=1 00:10:01.081 --rc genhtml_legend=1 00:10:01.081 --rc geninfo_all_blocks=1 00:10:01.081 --rc geninfo_unexecuted_blocks=1 00:10:01.081 00:10:01.081 ' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:01.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.081 --rc genhtml_branch_coverage=1 00:10:01.081 --rc genhtml_function_coverage=1 00:10:01.081 --rc genhtml_legend=1 00:10:01.081 --rc geninfo_all_blocks=1 00:10:01.081 --rc geninfo_unexecuted_blocks=1 00:10:01.081 00:10:01.081 ' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:01.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.081 --rc genhtml_branch_coverage=1 00:10:01.081 --rc genhtml_function_coverage=1 00:10:01.081 --rc genhtml_legend=1 00:10:01.081 --rc geninfo_all_blocks=1 00:10:01.081 --rc geninfo_unexecuted_blocks=1 00:10:01.081 00:10:01.081 ' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:01.081 05:27:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.081 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
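The lvs_grow script keeps two RPC endpoints side by side: /var/tmp/spdk.sock for the nvmf target and the bdevperf_rpc_sock defined above for the bdevperf process the test drives later. A minimal sketch of how the two sockets are addressed with rpc.py -s (the bdev_get_bdevs call is only an illustrative example, not a command taken from this run):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # default socket: the nvmf_tgt started by nvmfappstart
    $rpc_py -s /var/tmp/spdk.sock bdev_get_bdevs
    # bdevperf's own RPC server, once the test has started it
    $rpc_py -s /var/tmp/bdevperf.sock bdev_get_bdevs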
00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:01.081 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:01.082 Cannot find device "nvmf_init_br" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:01.082 Cannot find device "nvmf_init_br2" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:01.082 Cannot find device "nvmf_tgt_br" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:01.082 Cannot find device "nvmf_tgt_br2" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:01.082 Cannot find device "nvmf_init_br" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:01.082 Cannot find device "nvmf_init_br2" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:01.082 Cannot find device "nvmf_tgt_br" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:01.082 Cannot find device "nvmf_tgt_br2" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:01.082 Cannot find device "nvmf_br" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:01.082 Cannot find device "nvmf_init_if" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:01.082 Cannot find device "nvmf_init_if2" 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:01.082 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
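The nvmf_veth_init sequence replayed above builds the same topology as in the nvmf_lvol run: a network namespace for the target, veth pairs whose peer ends hang off a bridge, and 10.0.0.x/24 addresses on each end. Condensed to its core commands, this is a simplified recap using the names and addresses from the log; the real helper in test/nvmf/common.sh also wires up the second initiator/target pair and the iptables rules that follow:

    ip netns add nvmf_tgt_ns_spdk                              # target gets its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address (host side)
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                            # bridge joins the *_br peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The iptables ACCEPT rules and the four pings that follow in the log (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) confirm this bridge path before the target is started.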
00:10:01.341 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:01.342 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:01.342 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:10:01.342 00:10:01.342 --- 10.0.0.3 ping statistics --- 00:10:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.342 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:01.342 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:01.342 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:10:01.342 00:10:01.342 --- 10.0.0.4 ping statistics --- 00:10:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.342 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:01.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:01.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:01.342 00:10:01.342 --- 10.0.0.1 ping statistics --- 00:10:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.342 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:01.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:01.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:10:01.342 00:10:01.342 --- 10.0.0.2 ping statistics --- 00:10:01.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.342 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=67369 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 67369 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 67369 ']' 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.342 05:27:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:01.601 [2024-12-16 05:27:41.681359] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:01.601 [2024-12-16 05:27:41.681503] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.601 [2024-12-16 05:27:41.856725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.877 [2024-12-16 05:27:41.982617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.877 [2024-12-16 05:27:41.982705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.877 [2024-12-16 05:27:41.982741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.877 [2024-12-16 05:27:41.982769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.877 [2024-12-16 05:27:41.982786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.877 [2024-12-16 05:27:41.984282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.168 [2024-12-16 05:27:42.200489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.427 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.427 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:02.427 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.427 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.427 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:02.686 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.687 05:27:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:02.946 [2024-12-16 05:27:42.982929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:02.946 ************************************ 00:10:02.946 START TEST lvs_grow_clean 00:10:02.946 ************************************ 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:02.946 05:27:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:02.946 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:03.205 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:03.205 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:03.465 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:03.465 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:03.465 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:03.725 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:03.725 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:03.725 05:27:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0f767086-5ccf-416c-9c21-11d08e837ff7 lvol 150 00:10:03.984 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7cf4d329-2c38-447b-b4a2-19c5d3f4cf75 00:10:03.984 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.984 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:04.244 [2024-12-16 05:27:44.302808] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:04.244 [2024-12-16 05:27:44.302941] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:04.244 true 00:10:04.244 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:04.244 05:27:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:04.503 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:04.503 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:04.762 05:27:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7cf4d329-2c38-447b-b4a2-19c5d3f4cf75 00:10:04.762 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:05.026 [2024-12-16 05:27:45.247653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:05.026 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67457 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67457 /var/tmp/bdevperf.sock 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 67457 ']' 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.287 05:27:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:05.546 [2024-12-16 05:27:45.600404] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:05.546 [2024-12-16 05:27:45.600562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67457 ] 00:10:05.546 [2024-12-16 05:27:45.774945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.805 [2024-12-16 05:27:45.900204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.805 [2024-12-16 05:27:46.063281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.373 05:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.373 05:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:06.373 05:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:06.632 Nvme0n1 00:10:06.632 05:27:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:06.890 [ 00:10:06.890 { 00:10:06.890 "name": "Nvme0n1", 00:10:06.890 "aliases": [ 00:10:06.890 "7cf4d329-2c38-447b-b4a2-19c5d3f4cf75" 00:10:06.890 ], 00:10:06.890 "product_name": "NVMe disk", 00:10:06.890 "block_size": 4096, 00:10:06.890 "num_blocks": 38912, 00:10:06.890 "uuid": "7cf4d329-2c38-447b-b4a2-19c5d3f4cf75", 00:10:06.890 "numa_id": -1, 00:10:06.890 "assigned_rate_limits": { 00:10:06.890 "rw_ios_per_sec": 0, 00:10:06.890 "rw_mbytes_per_sec": 0, 00:10:06.890 "r_mbytes_per_sec": 0, 00:10:06.890 "w_mbytes_per_sec": 0 00:10:06.890 }, 00:10:06.890 "claimed": false, 00:10:06.890 "zoned": false, 00:10:06.890 "supported_io_types": { 00:10:06.890 "read": true, 00:10:06.890 "write": true, 00:10:06.890 "unmap": true, 00:10:06.890 "flush": true, 00:10:06.890 "reset": true, 00:10:06.890 "nvme_admin": true, 00:10:06.890 "nvme_io": true, 00:10:06.890 "nvme_io_md": false, 00:10:06.890 "write_zeroes": true, 00:10:06.890 "zcopy": false, 00:10:06.890 "get_zone_info": false, 00:10:06.890 "zone_management": false, 00:10:06.890 "zone_append": false, 00:10:06.890 "compare": true, 00:10:06.890 "compare_and_write": true, 00:10:06.890 "abort": true, 00:10:06.890 "seek_hole": false, 00:10:06.890 "seek_data": false, 00:10:06.890 "copy": true, 00:10:06.890 "nvme_iov_md": false 00:10:06.890 }, 00:10:06.890 "memory_domains": [ 00:10:06.890 { 00:10:06.890 "dma_device_id": "system", 00:10:06.890 "dma_device_type": 1 00:10:06.890 } 00:10:06.890 ], 00:10:06.890 "driver_specific": { 00:10:06.890 "nvme": [ 00:10:06.890 { 00:10:06.890 "trid": { 00:10:06.890 "trtype": "TCP", 00:10:06.890 "adrfam": "IPv4", 00:10:06.890 "traddr": "10.0.0.3", 00:10:06.890 "trsvcid": "4420", 00:10:06.890 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:06.890 }, 00:10:06.890 "ctrlr_data": { 00:10:06.890 "cntlid": 1, 00:10:06.890 "vendor_id": "0x8086", 00:10:06.890 "model_number": "SPDK bdev Controller", 00:10:06.890 "serial_number": "SPDK0", 00:10:06.890 "firmware_revision": "25.01", 00:10:06.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:06.890 "oacs": { 00:10:06.890 "security": 0, 00:10:06.890 "format": 0, 00:10:06.890 "firmware": 0, 
00:10:06.890 "ns_manage": 0 00:10:06.890 }, 00:10:06.890 "multi_ctrlr": true, 00:10:06.890 "ana_reporting": false 00:10:06.890 }, 00:10:06.890 "vs": { 00:10:06.890 "nvme_version": "1.3" 00:10:06.890 }, 00:10:06.890 "ns_data": { 00:10:06.890 "id": 1, 00:10:06.890 "can_share": true 00:10:06.890 } 00:10:06.890 } 00:10:06.890 ], 00:10:06.890 "mp_policy": "active_passive" 00:10:06.890 } 00:10:06.890 } 00:10:06.890 ] 00:10:06.890 05:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67481 00:10:06.890 05:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:06.890 05:27:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:07.148 Running I/O for 10 seconds... 00:10:08.080 Latency(us) 00:10:08.080 [2024-12-16T05:27:48.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.080 Nvme0n1 : 1.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:08.080 [2024-12-16T05:27:48.339Z] =================================================================================================================== 00:10:08.080 [2024-12-16T05:27:48.339Z] Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:08.080 00:10:09.014 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:09.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.014 Nvme0n1 : 2.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:09.014 [2024-12-16T05:27:49.273Z] =================================================================================================================== 00:10:09.014 [2024-12-16T05:27:49.273Z] Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:09.014 00:10:09.272 true 00:10:09.272 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:09.272 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:09.530 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:09.530 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:09.530 05:27:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 67481 00:10:10.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.096 Nvme0n1 : 3.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:10.096 [2024-12-16T05:27:50.355Z] =================================================================================================================== 00:10:10.096 [2024-12-16T05:27:50.355Z] Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:10.096 00:10:11.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.030 Nvme0n1 : 4.00 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:11.030 [2024-12-16T05:27:51.289Z] 
=================================================================================================================== 00:10:11.030 [2024-12-16T05:27:51.289Z] Total : 5969.00 23.32 0.00 0.00 0.00 0.00 0.00 00:10:11.030 00:10:12.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.403 Nvme0n1 : 5.00 5943.60 23.22 0.00 0.00 0.00 0.00 0.00 00:10:12.403 [2024-12-16T05:27:52.662Z] =================================================================================================================== 00:10:12.403 [2024-12-16T05:27:52.662Z] Total : 5943.60 23.22 0.00 0.00 0.00 0.00 0.00 00:10:12.403 00:10:13.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.336 Nvme0n1 : 6.00 5926.67 23.15 0.00 0.00 0.00 0.00 0.00 00:10:13.336 [2024-12-16T05:27:53.596Z] =================================================================================================================== 00:10:13.337 [2024-12-16T05:27:53.596Z] Total : 5926.67 23.15 0.00 0.00 0.00 0.00 0.00 00:10:13.337 00:10:14.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.270 Nvme0n1 : 7.00 5914.57 23.10 0.00 0.00 0.00 0.00 0.00 00:10:14.270 [2024-12-16T05:27:54.529Z] =================================================================================================================== 00:10:14.270 [2024-12-16T05:27:54.529Z] Total : 5914.57 23.10 0.00 0.00 0.00 0.00 0.00 00:10:14.270 00:10:15.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.203 Nvme0n1 : 8.00 5905.50 23.07 0.00 0.00 0.00 0.00 0.00 00:10:15.203 [2024-12-16T05:27:55.462Z] =================================================================================================================== 00:10:15.203 [2024-12-16T05:27:55.462Z] Total : 5905.50 23.07 0.00 0.00 0.00 0.00 0.00 00:10:15.203 00:10:16.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.144 Nvme0n1 : 9.00 5884.33 22.99 0.00 0.00 0.00 0.00 0.00 00:10:16.144 [2024-12-16T05:27:56.403Z] =================================================================================================================== 00:10:16.144 [2024-12-16T05:27:56.403Z] Total : 5884.33 22.99 0.00 0.00 0.00 0.00 0.00 00:10:16.144 00:10:17.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.080 Nvme0n1 : 10.00 5854.70 22.87 0.00 0.00 0.00 0.00 0.00 00:10:17.080 [2024-12-16T05:27:57.339Z] =================================================================================================================== 00:10:17.080 [2024-12-16T05:27:57.339Z] Total : 5854.70 22.87 0.00 0.00 0.00 0.00 0.00 00:10:17.080 00:10:17.080 00:10:17.080 Latency(us) 00:10:17.080 [2024-12-16T05:27:57.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.080 Nvme0n1 : 10.02 5854.62 22.87 0.00 0.00 21854.21 18588.39 49092.42 00:10:17.080 [2024-12-16T05:27:57.339Z] =================================================================================================================== 00:10:17.080 [2024-12-16T05:27:57.339Z] Total : 5854.62 22.87 0.00 0.00 21854.21 18588.39 49092.42 00:10:17.080 { 00:10:17.080 "results": [ 00:10:17.080 { 00:10:17.080 "job": "Nvme0n1", 00:10:17.080 "core_mask": "0x2", 00:10:17.080 "workload": "randwrite", 00:10:17.080 "status": "finished", 00:10:17.080 "queue_depth": 128, 00:10:17.080 "io_size": 4096, 00:10:17.080 "runtime": 
10.021998, 00:10:17.080 "iops": 5854.62100471383, 00:10:17.080 "mibps": 22.8696132996634, 00:10:17.080 "io_failed": 0, 00:10:17.080 "io_timeout": 0, 00:10:17.080 "avg_latency_us": 21854.20922164465, 00:10:17.080 "min_latency_us": 18588.392727272727, 00:10:17.080 "max_latency_us": 49092.42181818182 00:10:17.080 } 00:10:17.080 ], 00:10:17.080 "core_count": 1 00:10:17.080 } 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67457 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 67457 ']' 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 67457 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67457 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:17.080 killing process with pid 67457 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67457' 00:10:17.080 Received shutdown signal, test time was about 10.000000 seconds 00:10:17.080 00:10:17.080 Latency(us) 00:10:17.080 [2024-12-16T05:27:57.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.080 [2024-12-16T05:27:57.339Z] =================================================================================================================== 00:10:17.080 [2024-12-16T05:27:57.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 67457 00:10:17.080 05:27:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 67457 00:10:18.018 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:18.277 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:18.536 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:18.536 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:18.796 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:18.796 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:18.796 05:27:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:19.054 [2024-12-16 05:27:59.168572] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:19.054 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:19.054 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:19.054 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:19.054 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.054 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.054 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.054 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.054 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.054 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.055 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.055 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:19.055 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:19.314 request: 00:10:19.314 { 00:10:19.314 "uuid": "0f767086-5ccf-416c-9c21-11d08e837ff7", 00:10:19.314 "method": "bdev_lvol_get_lvstores", 00:10:19.314 "req_id": 1 00:10:19.314 } 00:10:19.314 Got JSON-RPC error response 00:10:19.314 response: 00:10:19.314 { 00:10:19.314 "code": -19, 00:10:19.314 "message": "No such device" 00:10:19.314 } 00:10:19.314 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:19.314 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:19.314 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:19.314 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:19.314 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:19.573 aio_bdev 00:10:19.573 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
7cf4d329-2c38-447b-b4a2-19c5d3f4cf75 00:10:19.573 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7cf4d329-2c38-447b-b4a2-19c5d3f4cf75 00:10:19.573 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.573 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:19.573 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.573 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.573 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:19.832 05:27:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cf4d329-2c38-447b-b4a2-19c5d3f4cf75 -t 2000 00:10:20.091 [ 00:10:20.091 { 00:10:20.091 "name": "7cf4d329-2c38-447b-b4a2-19c5d3f4cf75", 00:10:20.091 "aliases": [ 00:10:20.091 "lvs/lvol" 00:10:20.091 ], 00:10:20.091 "product_name": "Logical Volume", 00:10:20.091 "block_size": 4096, 00:10:20.091 "num_blocks": 38912, 00:10:20.091 "uuid": "7cf4d329-2c38-447b-b4a2-19c5d3f4cf75", 00:10:20.091 "assigned_rate_limits": { 00:10:20.091 "rw_ios_per_sec": 0, 00:10:20.091 "rw_mbytes_per_sec": 0, 00:10:20.091 "r_mbytes_per_sec": 0, 00:10:20.091 "w_mbytes_per_sec": 0 00:10:20.091 }, 00:10:20.091 "claimed": false, 00:10:20.091 "zoned": false, 00:10:20.091 "supported_io_types": { 00:10:20.091 "read": true, 00:10:20.091 "write": true, 00:10:20.091 "unmap": true, 00:10:20.091 "flush": false, 00:10:20.091 "reset": true, 00:10:20.091 "nvme_admin": false, 00:10:20.091 "nvme_io": false, 00:10:20.092 "nvme_io_md": false, 00:10:20.092 "write_zeroes": true, 00:10:20.092 "zcopy": false, 00:10:20.092 "get_zone_info": false, 00:10:20.092 "zone_management": false, 00:10:20.092 "zone_append": false, 00:10:20.092 "compare": false, 00:10:20.092 "compare_and_write": false, 00:10:20.092 "abort": false, 00:10:20.092 "seek_hole": true, 00:10:20.092 "seek_data": true, 00:10:20.092 "copy": false, 00:10:20.092 "nvme_iov_md": false 00:10:20.092 }, 00:10:20.092 "driver_specific": { 00:10:20.092 "lvol": { 00:10:20.092 "lvol_store_uuid": "0f767086-5ccf-416c-9c21-11d08e837ff7", 00:10:20.092 "base_bdev": "aio_bdev", 00:10:20.092 "thin_provision": false, 00:10:20.092 "num_allocated_clusters": 38, 00:10:20.092 "snapshot": false, 00:10:20.092 "clone": false, 00:10:20.092 "esnap_clone": false 00:10:20.092 } 00:10:20.092 } 00:10:20.092 } 00:10:20.092 ] 00:10:20.092 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:20.092 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:20.092 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:20.350 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:20.350 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:10:20.350 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:20.609 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:20.609 05:28:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7cf4d329-2c38-447b-b4a2-19c5d3f4cf75 00:10:20.874 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f767086-5ccf-416c-9c21-11d08e837ff7 00:10:21.135 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:21.393 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:21.959 ************************************ 00:10:21.959 END TEST lvs_grow_clean 00:10:21.959 ************************************ 00:10:21.959 00:10:21.959 real 0m18.906s 00:10:21.959 user 0m18.113s 00:10:21.959 sys 0m2.335s 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:21.959 ************************************ 00:10:21.959 START TEST lvs_grow_dirty 00:10:21.959 ************************************ 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:21.959 05:28:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:22.217 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:22.217 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:22.476 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:22.476 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:22.476 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:22.734 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:22.734 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:22.734 05:28:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 lvol 150 00:10:22.992 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=45690d4b-5f66-45ee-af00-7cb93ca55555 00:10:22.992 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.992 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:23.250 [2024-12-16 05:28:03.320213] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:23.250 [2024-12-16 05:28:03.320327] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:23.250 true 00:10:23.250 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:23.250 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:23.507 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:23.507 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:23.766 05:28:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 45690d4b-5f66-45ee-af00-7cb93ca55555 00:10:24.024 05:28:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:24.282 [2024-12-16 05:28:04.381027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:24.282 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67739 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67739 /var/tmp/bdevperf.sock 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67739 ']' 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.540 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:24.540 [2024-12-16 05:28:04.739758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:24.540 [2024-12-16 05:28:04.739913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67739 ] 00:10:24.798 [2024-12-16 05:28:04.918159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.798 [2024-12-16 05:28:05.041821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.063 [2024-12-16 05:28:05.226554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:25.630 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.630 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:25.630 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:25.888 Nvme0n1 00:10:25.888 05:28:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:26.146 [ 00:10:26.146 { 00:10:26.146 "name": "Nvme0n1", 00:10:26.146 "aliases": [ 00:10:26.146 "45690d4b-5f66-45ee-af00-7cb93ca55555" 00:10:26.146 ], 00:10:26.146 "product_name": "NVMe disk", 00:10:26.146 "block_size": 4096, 00:10:26.146 "num_blocks": 38912, 00:10:26.146 "uuid": "45690d4b-5f66-45ee-af00-7cb93ca55555", 00:10:26.146 "numa_id": -1, 00:10:26.146 "assigned_rate_limits": { 00:10:26.146 "rw_ios_per_sec": 0, 00:10:26.146 "rw_mbytes_per_sec": 0, 00:10:26.146 "r_mbytes_per_sec": 0, 00:10:26.146 "w_mbytes_per_sec": 0 00:10:26.146 }, 00:10:26.146 "claimed": false, 00:10:26.146 "zoned": false, 00:10:26.146 "supported_io_types": { 00:10:26.146 "read": true, 00:10:26.146 "write": true, 00:10:26.146 "unmap": true, 00:10:26.146 "flush": true, 00:10:26.146 "reset": true, 00:10:26.146 "nvme_admin": true, 00:10:26.146 "nvme_io": true, 00:10:26.146 "nvme_io_md": false, 00:10:26.146 "write_zeroes": true, 00:10:26.146 "zcopy": false, 00:10:26.146 "get_zone_info": false, 00:10:26.146 "zone_management": false, 00:10:26.146 "zone_append": false, 00:10:26.146 "compare": true, 00:10:26.146 "compare_and_write": true, 00:10:26.146 "abort": true, 00:10:26.146 "seek_hole": false, 00:10:26.146 "seek_data": false, 00:10:26.146 "copy": true, 00:10:26.146 "nvme_iov_md": false 00:10:26.146 }, 00:10:26.146 "memory_domains": [ 00:10:26.146 { 00:10:26.146 "dma_device_id": "system", 00:10:26.146 "dma_device_type": 1 00:10:26.146 } 00:10:26.146 ], 00:10:26.146 "driver_specific": { 00:10:26.146 "nvme": [ 00:10:26.146 { 00:10:26.146 "trid": { 00:10:26.146 "trtype": "TCP", 00:10:26.146 "adrfam": "IPv4", 00:10:26.146 "traddr": "10.0.0.3", 00:10:26.147 "trsvcid": "4420", 00:10:26.147 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:26.147 }, 00:10:26.147 "ctrlr_data": { 00:10:26.147 "cntlid": 1, 00:10:26.147 "vendor_id": "0x8086", 00:10:26.147 "model_number": "SPDK bdev Controller", 00:10:26.147 "serial_number": "SPDK0", 00:10:26.147 "firmware_revision": "25.01", 00:10:26.147 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:26.147 "oacs": { 00:10:26.147 "security": 0, 00:10:26.147 "format": 0, 00:10:26.147 "firmware": 0, 
00:10:26.147 "ns_manage": 0 00:10:26.147 }, 00:10:26.147 "multi_ctrlr": true, 00:10:26.147 "ana_reporting": false 00:10:26.147 }, 00:10:26.147 "vs": { 00:10:26.147 "nvme_version": "1.3" 00:10:26.147 }, 00:10:26.147 "ns_data": { 00:10:26.147 "id": 1, 00:10:26.147 "can_share": true 00:10:26.147 } 00:10:26.147 } 00:10:26.147 ], 00:10:26.147 "mp_policy": "active_passive" 00:10:26.147 } 00:10:26.147 } 00:10:26.147 ] 00:10:26.147 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:26.147 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67762 00:10:26.147 05:28:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:26.147 Running I/O for 10 seconds... 00:10:27.522 Latency(us) 00:10:27.522 [2024-12-16T05:28:07.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.522 Nvme0n1 : 1.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:10:27.522 [2024-12-16T05:28:07.781Z] =================================================================================================================== 00:10:27.522 [2024-12-16T05:28:07.781Z] Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:10:27.522 00:10:28.089 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:28.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.348 Nvme0n1 : 2.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:10:28.348 [2024-12-16T05:28:08.607Z] =================================================================================================================== 00:10:28.348 [2024-12-16T05:28:08.607Z] Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:10:28.348 00:10:28.348 true 00:10:28.348 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:28.348 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:28.607 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:28.607 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:28.607 05:28:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67762 00:10:29.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.174 Nvme0n1 : 3.00 5884.33 22.99 0.00 0.00 0.00 0.00 0.00 00:10:29.174 [2024-12-16T05:28:09.433Z] =================================================================================================================== 00:10:29.174 [2024-12-16T05:28:09.433Z] Total : 5884.33 22.99 0.00 0.00 0.00 0.00 0.00 00:10:29.174 00:10:30.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.622 Nvme0n1 : 4.00 5926.25 23.15 0.00 0.00 0.00 0.00 0.00 00:10:30.622 [2024-12-16T05:28:10.881Z] 
=================================================================================================================== 00:10:30.622 [2024-12-16T05:28:10.881Z] Total : 5926.25 23.15 0.00 0.00 0.00 0.00 0.00 00:10:30.622 00:10:31.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.193 Nvme0n1 : 5.00 5858.60 22.89 0.00 0.00 0.00 0.00 0.00 00:10:31.193 [2024-12-16T05:28:11.452Z] =================================================================================================================== 00:10:31.193 [2024-12-16T05:28:11.452Z] Total : 5858.60 22.89 0.00 0.00 0.00 0.00 0.00 00:10:31.193 00:10:32.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.132 Nvme0n1 : 6.00 5792.33 22.63 0.00 0.00 0.00 0.00 0.00 00:10:32.132 [2024-12-16T05:28:12.391Z] =================================================================================================================== 00:10:32.132 [2024-12-16T05:28:12.391Z] Total : 5792.33 22.63 0.00 0.00 0.00 0.00 0.00 00:10:32.132 00:10:33.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.512 Nvme0n1 : 7.00 5745.00 22.44 0.00 0.00 0.00 0.00 0.00 00:10:33.512 [2024-12-16T05:28:13.771Z] =================================================================================================================== 00:10:33.512 [2024-12-16T05:28:13.771Z] Total : 5745.00 22.44 0.00 0.00 0.00 0.00 0.00 00:10:33.512 00:10:34.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.450 Nvme0n1 : 8.00 5709.50 22.30 0.00 0.00 0.00 0.00 0.00 00:10:34.450 [2024-12-16T05:28:14.709Z] =================================================================================================================== 00:10:34.450 [2024-12-16T05:28:14.709Z] Total : 5709.50 22.30 0.00 0.00 0.00 0.00 0.00 00:10:34.450 00:10:35.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.475 Nvme0n1 : 9.00 5681.89 22.19 0.00 0.00 0.00 0.00 0.00 00:10:35.475 [2024-12-16T05:28:15.734Z] =================================================================================================================== 00:10:35.475 [2024-12-16T05:28:15.734Z] Total : 5681.89 22.19 0.00 0.00 0.00 0.00 0.00 00:10:35.475 00:10:36.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.413 Nvme0n1 : 10.00 5659.80 22.11 0.00 0.00 0.00 0.00 0.00 00:10:36.413 [2024-12-16T05:28:16.672Z] =================================================================================================================== 00:10:36.413 [2024-12-16T05:28:16.672Z] Total : 5659.80 22.11 0.00 0.00 0.00 0.00 0.00 00:10:36.413 00:10:36.413 00:10:36.413 Latency(us) 00:10:36.413 [2024-12-16T05:28:16.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.413 Nvme0n1 : 10.01 5668.42 22.14 0.00 0.00 22574.56 4647.10 47900.86 00:10:36.413 [2024-12-16T05:28:16.672Z] =================================================================================================================== 00:10:36.413 [2024-12-16T05:28:16.672Z] Total : 5668.42 22.14 0.00 0.00 22574.56 4647.10 47900.86 00:10:36.413 { 00:10:36.413 "results": [ 00:10:36.413 { 00:10:36.413 "job": "Nvme0n1", 00:10:36.413 "core_mask": "0x2", 00:10:36.413 "workload": "randwrite", 00:10:36.413 "status": "finished", 00:10:36.413 "queue_depth": 128, 00:10:36.413 "io_size": 4096, 00:10:36.413 "runtime": 
10.007376, 00:10:36.414 "iops": 5668.418974164656, 00:10:36.414 "mibps": 22.142261617830687, 00:10:36.414 "io_failed": 0, 00:10:36.414 "io_timeout": 0, 00:10:36.414 "avg_latency_us": 22574.557073267668, 00:10:36.414 "min_latency_us": 4647.098181818182, 00:10:36.414 "max_latency_us": 47900.858181818185 00:10:36.414 } 00:10:36.414 ], 00:10:36.414 "core_count": 1 00:10:36.414 } 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67739 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 67739 ']' 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 67739 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67739 00:10:36.414 killing process with pid 67739 00:10:36.414 Received shutdown signal, test time was about 10.000000 seconds 00:10:36.414 00:10:36.414 Latency(us) 00:10:36.414 [2024-12-16T05:28:16.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.414 [2024-12-16T05:28:16.673Z] =================================================================================================================== 00:10:36.414 [2024-12-16T05:28:16.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67739' 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 67739 00:10:36.414 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 67739 00:10:37.352 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:37.610 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:37.869 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:37.869 05:28:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 67369 
00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 67369 00:10:38.128 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 67369 Killed "${NVMF_APP[@]}" "$@" 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:38.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67907 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67907 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67907 ']' 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.128 05:28:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:38.387 [2024-12-16 05:28:18.427075] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:38.387 [2024-12-16 05:28:18.427527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.387 [2024-12-16 05:28:18.614425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.647 [2024-12-16 05:28:18.712623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.647 [2024-12-16 05:28:18.713006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.647 [2024-12-16 05:28:18.713180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.647 [2024-12-16 05:28:18.713450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.647 [2024-12-16 05:28:18.713501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:38.647 [2024-12-16 05:28:18.714794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.647 [2024-12-16 05:28:18.879349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.215 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.215 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:39.215 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.215 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.215 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:39.215 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.215 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:39.473 [2024-12-16 05:28:19.640048] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:39.473 [2024-12-16 05:28:19.640682] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:39.473 [2024-12-16 05:28:19.640873] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:39.473 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:39.473 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 45690d4b-5f66-45ee-af00-7cb93ca55555 00:10:39.473 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=45690d4b-5f66-45ee-af00-7cb93ca55555 00:10:39.473 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.473 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:39.473 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.473 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.473 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:39.732 05:28:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45690d4b-5f66-45ee-af00-7cb93ca55555 -t 2000 00:10:39.989 [ 00:10:39.989 { 00:10:39.989 "name": "45690d4b-5f66-45ee-af00-7cb93ca55555", 00:10:39.989 "aliases": [ 00:10:39.989 "lvs/lvol" 00:10:39.989 ], 00:10:39.989 "product_name": "Logical Volume", 00:10:39.989 "block_size": 4096, 00:10:39.989 "num_blocks": 38912, 00:10:39.990 "uuid": "45690d4b-5f66-45ee-af00-7cb93ca55555", 00:10:39.990 "assigned_rate_limits": { 00:10:39.990 "rw_ios_per_sec": 0, 00:10:39.990 "rw_mbytes_per_sec": 0, 00:10:39.990 "r_mbytes_per_sec": 0, 00:10:39.990 "w_mbytes_per_sec": 0 00:10:39.990 }, 00:10:39.990 
"claimed": false, 00:10:39.990 "zoned": false, 00:10:39.990 "supported_io_types": { 00:10:39.990 "read": true, 00:10:39.990 "write": true, 00:10:39.990 "unmap": true, 00:10:39.990 "flush": false, 00:10:39.990 "reset": true, 00:10:39.990 "nvme_admin": false, 00:10:39.990 "nvme_io": false, 00:10:39.990 "nvme_io_md": false, 00:10:39.990 "write_zeroes": true, 00:10:39.990 "zcopy": false, 00:10:39.990 "get_zone_info": false, 00:10:39.990 "zone_management": false, 00:10:39.990 "zone_append": false, 00:10:39.990 "compare": false, 00:10:39.990 "compare_and_write": false, 00:10:39.990 "abort": false, 00:10:39.990 "seek_hole": true, 00:10:39.990 "seek_data": true, 00:10:39.990 "copy": false, 00:10:39.990 "nvme_iov_md": false 00:10:39.990 }, 00:10:39.990 "driver_specific": { 00:10:39.990 "lvol": { 00:10:39.990 "lvol_store_uuid": "44338abd-99e2-4e9d-88fc-eb043bf82c73", 00:10:39.990 "base_bdev": "aio_bdev", 00:10:39.990 "thin_provision": false, 00:10:39.990 "num_allocated_clusters": 38, 00:10:39.990 "snapshot": false, 00:10:39.990 "clone": false, 00:10:39.990 "esnap_clone": false 00:10:39.990 } 00:10:39.990 } 00:10:39.990 } 00:10:39.990 ] 00:10:39.990 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:39.990 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:39.990 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:40.557 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:40.557 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:40.557 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:40.816 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:40.816 05:28:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:40.816 [2024-12-16 05:28:21.033555] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:40.816 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:40.816 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:40.816 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:40.816 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.075 05:28:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:41.075 request: 00:10:41.075 { 00:10:41.075 "uuid": "44338abd-99e2-4e9d-88fc-eb043bf82c73", 00:10:41.075 "method": "bdev_lvol_get_lvstores", 00:10:41.075 "req_id": 1 00:10:41.075 } 00:10:41.075 Got JSON-RPC error response 00:10:41.075 response: 00:10:41.075 { 00:10:41.075 "code": -19, 00:10:41.075 "message": "No such device" 00:10:41.075 } 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:41.075 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:41.334 aio_bdev 00:10:41.334 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 45690d4b-5f66-45ee-af00-7cb93ca55555 00:10:41.334 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=45690d4b-5f66-45ee-af00-7cb93ca55555 00:10:41.334 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.334 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:41.334 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.334 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.334 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:41.902 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45690d4b-5f66-45ee-af00-7cb93ca55555 -t 2000 00:10:41.902 [ 00:10:41.902 { 
00:10:41.902 "name": "45690d4b-5f66-45ee-af00-7cb93ca55555", 00:10:41.902 "aliases": [ 00:10:41.902 "lvs/lvol" 00:10:41.902 ], 00:10:41.902 "product_name": "Logical Volume", 00:10:41.902 "block_size": 4096, 00:10:41.902 "num_blocks": 38912, 00:10:41.902 "uuid": "45690d4b-5f66-45ee-af00-7cb93ca55555", 00:10:41.902 "assigned_rate_limits": { 00:10:41.902 "rw_ios_per_sec": 0, 00:10:41.902 "rw_mbytes_per_sec": 0, 00:10:41.902 "r_mbytes_per_sec": 0, 00:10:41.902 "w_mbytes_per_sec": 0 00:10:41.902 }, 00:10:41.902 "claimed": false, 00:10:41.902 "zoned": false, 00:10:41.902 "supported_io_types": { 00:10:41.902 "read": true, 00:10:41.902 "write": true, 00:10:41.902 "unmap": true, 00:10:41.902 "flush": false, 00:10:41.902 "reset": true, 00:10:41.902 "nvme_admin": false, 00:10:41.902 "nvme_io": false, 00:10:41.902 "nvme_io_md": false, 00:10:41.902 "write_zeroes": true, 00:10:41.902 "zcopy": false, 00:10:41.902 "get_zone_info": false, 00:10:41.902 "zone_management": false, 00:10:41.902 "zone_append": false, 00:10:41.902 "compare": false, 00:10:41.902 "compare_and_write": false, 00:10:41.902 "abort": false, 00:10:41.902 "seek_hole": true, 00:10:41.902 "seek_data": true, 00:10:41.902 "copy": false, 00:10:41.902 "nvme_iov_md": false 00:10:41.902 }, 00:10:41.902 "driver_specific": { 00:10:41.902 "lvol": { 00:10:41.902 "lvol_store_uuid": "44338abd-99e2-4e9d-88fc-eb043bf82c73", 00:10:41.902 "base_bdev": "aio_bdev", 00:10:41.902 "thin_provision": false, 00:10:41.902 "num_allocated_clusters": 38, 00:10:41.902 "snapshot": false, 00:10:41.902 "clone": false, 00:10:41.902 "esnap_clone": false 00:10:41.902 } 00:10:41.902 } 00:10:41.902 } 00:10:41.902 ] 00:10:41.902 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:41.902 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:41.902 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:42.161 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:42.161 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:42.161 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:42.419 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:42.419 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 45690d4b-5f66-45ee-af00-7cb93ca55555 00:10:42.678 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 44338abd-99e2-4e9d-88fc-eb043bf82c73 00:10:43.244 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:43.244 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:43.811 ************************************ 00:10:43.811 END TEST lvs_grow_dirty 00:10:43.811 ************************************ 00:10:43.811 00:10:43.811 real 0m21.805s 00:10:43.811 user 0m46.635s 00:10:43.811 sys 0m7.792s 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:43.811 nvmf_trace.0 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.811 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.070 rmmod nvme_tcp 00:10:44.070 rmmod nvme_fabrics 00:10:44.070 rmmod nvme_keyring 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67907 ']' 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67907 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 67907 ']' 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 67907 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:44.070 05:28:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67907 00:10:44.070 killing process with pid 67907 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67907' 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 67907 00:10:44.070 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 67907 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:45.005 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:45.263 ************************************ 00:10:45.263 END TEST nvmf_lvs_grow 00:10:45.263 ************************************ 00:10:45.263 00:10:45.263 real 0m44.379s 00:10:45.263 user 1m12.228s 00:10:45.263 sys 0m11.040s 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.263 ************************************ 00:10:45.263 START TEST nvmf_bdev_io_wait 00:10:45.263 ************************************ 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:45.263 * Looking for test storage... 
00:10:45.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:45.263 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:45.522 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.523 --rc genhtml_branch_coverage=1 00:10:45.523 --rc genhtml_function_coverage=1 00:10:45.523 --rc genhtml_legend=1 00:10:45.523 --rc geninfo_all_blocks=1 00:10:45.523 --rc geninfo_unexecuted_blocks=1 00:10:45.523 00:10:45.523 ' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.523 --rc genhtml_branch_coverage=1 00:10:45.523 --rc genhtml_function_coverage=1 00:10:45.523 --rc genhtml_legend=1 00:10:45.523 --rc geninfo_all_blocks=1 00:10:45.523 --rc geninfo_unexecuted_blocks=1 00:10:45.523 00:10:45.523 ' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.523 --rc genhtml_branch_coverage=1 00:10:45.523 --rc genhtml_function_coverage=1 00:10:45.523 --rc genhtml_legend=1 00:10:45.523 --rc geninfo_all_blocks=1 00:10:45.523 --rc geninfo_unexecuted_blocks=1 00:10:45.523 00:10:45.523 ' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.523 --rc genhtml_branch_coverage=1 00:10:45.523 --rc genhtml_function_coverage=1 00:10:45.523 --rc genhtml_legend=1 00:10:45.523 --rc geninfo_all_blocks=1 00:10:45.523 --rc geninfo_unexecuted_blocks=1 00:10:45.523 00:10:45.523 ' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.523 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:45.523 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:45.524 
05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:45.524 Cannot find device "nvmf_init_br" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:45.524 Cannot find device "nvmf_init_br2" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:45.524 Cannot find device "nvmf_tgt_br" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.524 Cannot find device "nvmf_tgt_br2" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:45.524 Cannot find device "nvmf_init_br" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:45.524 Cannot find device "nvmf_init_br2" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:45.524 Cannot find device "nvmf_tgt_br" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:45.524 Cannot find device "nvmf_tgt_br2" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:45.524 Cannot find device "nvmf_br" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:45.524 Cannot find device "nvmf_init_if" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:45.524 Cannot find device "nvmf_init_if2" 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:45.524 
05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:45.524 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:45.783 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:45.783 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:45.783 00:10:45.783 --- 10.0.0.3 ping statistics --- 00:10:45.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.783 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:45.783 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:45.783 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:45.783 00:10:45.783 --- 10.0.0.4 ping statistics --- 00:10:45.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.783 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:45.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:45.783 00:10:45.783 --- 10.0.0.1 ping statistics --- 00:10:45.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.783 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:45.783 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:45.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:45.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:10:45.783 00:10:45.783 --- 10.0.0.2 ping statistics --- 00:10:45.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.783 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=68291 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 68291 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 68291 ']' 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.783 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:46.041 [2024-12-16 05:28:26.153692] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:46.042 [2024-12-16 05:28:26.153877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.300 [2024-12-16 05:28:26.339216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.300 [2024-12-16 05:28:26.429505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.300 [2024-12-16 05:28:26.429579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.300 [2024-12-16 05:28:26.429624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.300 [2024-12-16 05:28:26.429636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.300 [2024-12-16 05:28:26.429648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.300 [2024-12-16 05:28:26.431375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.300 [2024-12-16 05:28:26.431525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.300 [2024-12-16 05:28:26.431656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.300 [2024-12-16 05:28:26.431678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 [2024-12-16 05:28:27.349516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 [2024-12-16 05:28:27.370260] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 Malloc0 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:47.234 [2024-12-16 05:28:27.466571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=68326 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=68328 00:10:47.234 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:47.235 05:28:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:47.235 { 00:10:47.235 "params": { 00:10:47.235 "name": "Nvme$subsystem", 00:10:47.235 "trtype": "$TEST_TRANSPORT", 00:10:47.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:47.235 "adrfam": "ipv4", 00:10:47.235 "trsvcid": "$NVMF_PORT", 00:10:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:47.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:47.235 "hdgst": ${hdgst:-false}, 00:10:47.235 "ddgst": ${ddgst:-false} 00:10:47.235 }, 00:10:47.235 "method": "bdev_nvme_attach_controller" 00:10:47.235 } 00:10:47.235 EOF 00:10:47.235 )") 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=68330 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:47.235 { 00:10:47.235 "params": { 00:10:47.235 "name": "Nvme$subsystem", 00:10:47.235 "trtype": "$TEST_TRANSPORT", 00:10:47.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:47.235 "adrfam": "ipv4", 00:10:47.235 "trsvcid": "$NVMF_PORT", 00:10:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:47.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:47.235 "hdgst": ${hdgst:-false}, 00:10:47.235 "ddgst": ${ddgst:-false} 00:10:47.235 }, 00:10:47.235 "method": "bdev_nvme_attach_controller" 00:10:47.235 } 00:10:47.235 EOF 00:10:47.235 )") 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=68333 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:10:47.235 { 00:10:47.235 "params": { 00:10:47.235 "name": "Nvme$subsystem", 00:10:47.235 "trtype": "$TEST_TRANSPORT", 00:10:47.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:47.235 "adrfam": "ipv4", 00:10:47.235 "trsvcid": "$NVMF_PORT", 00:10:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:47.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:47.235 "hdgst": ${hdgst:-false}, 00:10:47.235 "ddgst": ${ddgst:-false} 00:10:47.235 }, 00:10:47.235 "method": "bdev_nvme_attach_controller" 00:10:47.235 } 00:10:47.235 EOF 00:10:47.235 )") 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:47.235 "params": { 00:10:47.235 "name": "Nvme1", 00:10:47.235 "trtype": "tcp", 00:10:47.235 "traddr": "10.0.0.3", 00:10:47.235 "adrfam": "ipv4", 00:10:47.235 "trsvcid": "4420", 00:10:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:47.235 "hdgst": false, 00:10:47.235 "ddgst": false 00:10:47.235 }, 00:10:47.235 "method": "bdev_nvme_attach_controller" 00:10:47.235 }' 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:47.235 "params": { 00:10:47.235 "name": "Nvme1", 00:10:47.235 "trtype": "tcp", 00:10:47.235 "traddr": "10.0.0.3", 00:10:47.235 "adrfam": "ipv4", 00:10:47.235 "trsvcid": "4420", 00:10:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:47.235 "hdgst": false, 00:10:47.235 "ddgst": false 00:10:47.235 }, 00:10:47.235 "method": "bdev_nvme_attach_controller" 00:10:47.235 }' 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:47.235 { 00:10:47.235 "params": { 00:10:47.235 "name": "Nvme$subsystem", 00:10:47.235 "trtype": "$TEST_TRANSPORT", 00:10:47.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:47.235 "adrfam": "ipv4", 00:10:47.235 "trsvcid": "$NVMF_PORT", 00:10:47.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:47.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:47.235 "hdgst": ${hdgst:-false}, 00:10:47.235 "ddgst": ${ddgst:-false} 00:10:47.235 }, 00:10:47.235 "method": "bdev_nvme_attach_controller" 00:10:47.235 } 00:10:47.235 EOF 00:10:47.235 )") 00:10:47.235 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:47.493 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:47.493 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:47.493 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:47.493 "params": { 00:10:47.493 "name": "Nvme1", 00:10:47.493 "trtype": "tcp", 00:10:47.493 "traddr": "10.0.0.3", 00:10:47.493 "adrfam": "ipv4", 00:10:47.493 "trsvcid": "4420", 00:10:47.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:47.493 "hdgst": false, 00:10:47.493 "ddgst": false 00:10:47.493 }, 00:10:47.493 "method": "bdev_nvme_attach_controller" 00:10:47.493 }' 00:10:47.493 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:47.493 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:47.493 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:47.493 "params": { 00:10:47.493 "name": "Nvme1", 00:10:47.493 "trtype": "tcp", 00:10:47.493 "traddr": "10.0.0.3", 00:10:47.493 "adrfam": "ipv4", 00:10:47.493 "trsvcid": "4420", 00:10:47.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:47.493 "hdgst": false, 00:10:47.493 "ddgst": false 00:10:47.493 }, 00:10:47.493 "method": "bdev_nvme_attach_controller" 00:10:47.493 }' 00:10:47.493 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 68326 00:10:47.493 [2024-12-16 05:28:27.584264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:47.493 [2024-12-16 05:28:27.585011] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:47.493 [2024-12-16 05:28:27.597053] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:47.493 [2024-12-16 05:28:27.597198] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:47.493 [2024-12-16 05:28:27.606056] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:47.493 [2024-12-16 05:28:27.606341] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:47.493 [2024-12-16 05:28:27.618140] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:47.493 [2024-12-16 05:28:27.618281] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:10:47.751 [2024-12-16 05:28:27.810201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:47.751 [2024-12-16 05:28:27.856574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:47.751 [2024-12-16 05:28:27.901750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:47.751 [2024-12-16 05:28:27.927426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:10:47.751 [2024-12-16 05:28:27.949851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:47.751 [2024-12-16 05:28:27.973319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:10:48.010 [2024-12-16 05:28:28.017859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:10:48.010 [2024-12-16 05:28:28.070157] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:10:48.010 [2024-12-16 05:28:28.109757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:10:48.010 [2024-12-16 05:28:28.147883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:10:48.010 [2024-12-16 05:28:28.189796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:10:48.010 [2024-12-16 05:28:28.244304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:10:48.268 Running I/O for 1 seconds...
00:10:48.268 Running I/O for 1 seconds...
00:10:48.268 Running I/O for 1 seconds...
00:10:48.268 Running I/O for 1 seconds...
00:10:49.202 4884.00 IOPS, 19.08 MiB/s
00:10:49.202 Latency(us)
00:10:49.202 [2024-12-16T05:28:29.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:49.202 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:10:49.202 Nvme1n1 : 1.03 4892.55 19.11 0.00 0.00 25726.03 6553.60 43611.23
00:10:49.202 [2024-12-16T05:28:29.461Z] ===================================================================================================================
00:10:49.202 [2024-12-16T05:28:29.461Z] Total : 4892.55 19.11 0.00 0.00 25726.03 6553.60 43611.23
00:10:49.202 7136.00 IOPS, 27.88 MiB/s
00:10:49.202 Latency(us)
00:10:49.202 [2024-12-16T05:28:29.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:49.202 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:10:49.202 Nvme1n1 : 1.01 7184.01 28.06 0.00 0.00 17706.93 4527.94 28835.84
00:10:49.202 [2024-12-16T05:28:29.461Z] ===================================================================================================================
00:10:49.202 [2024-12-16T05:28:29.461Z] Total : 7184.01 28.06 0.00 0.00 17706.93 4527.94 28835.84
00:10:49.202 135216.00 IOPS, 528.19 MiB/s
00:10:49.202 Latency(us)
00:10:49.202 [2024-12-16T05:28:29.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:49.202 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:10:49.202 Nvme1n1 : 1.00 134902.80 526.96 0.00 0.00 943.91 463.59 2278.87
00:10:49.202 [2024-12-16T05:28:29.461Z] ===================================================================================================================
00:10:49.202 [2024-12-16T05:28:29.461Z] Total : 134902.80 526.96 0.00 0.00 943.91 463.59 2278.87
00:10:49.460 5066.00 IOPS, 19.79 MiB/s
00:10:49.460 Latency(us)
00:10:49.460 [2024-12-16T05:28:29.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:49.460 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:49.460 Nvme1n1 : 1.01 5192.18 20.28 0.00 0.00 24551.95 6881.28 55526.87
00:10:49.460 [2024-12-16T05:28:29.719Z] ===================================================================================================================
00:10:49.460 [2024-12-16T05:28:29.719Z] Total : 5192.18 20.28 0.00 0.00 24551.95 6881.28 55526.87
00:10:49.718 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 68328
00:10:49.718 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 68330
00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 68333
00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- #
nvmfcleanup 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:49.976 rmmod nvme_tcp 00:10:49.976 rmmod nvme_fabrics 00:10:49.976 rmmod nvme_keyring 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 68291 ']' 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 68291 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 68291 ']' 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 68291 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68291 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.976 killing process with pid 68291 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68291' 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 68291 00:10:49.976 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 68291 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:50.910 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:51.169 00:10:51.169 real 0m5.942s 00:10:51.169 user 0m25.532s 00:10:51.169 sys 0m2.515s 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.169 ************************************ 00:10:51.169 END TEST nvmf_bdev_io_wait 00:10:51.169 ************************************ 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:51.169 ************************************ 00:10:51.169 START TEST nvmf_queue_depth 00:10:51.169 ************************************ 00:10:51.169 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:51.430 * Looking for test storage... 
00:10:51.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.430 --rc genhtml_branch_coverage=1 00:10:51.430 --rc genhtml_function_coverage=1 00:10:51.430 --rc genhtml_legend=1 00:10:51.430 --rc geninfo_all_blocks=1 00:10:51.430 --rc geninfo_unexecuted_blocks=1 00:10:51.430 00:10:51.430 ' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.430 --rc genhtml_branch_coverage=1 00:10:51.430 --rc genhtml_function_coverage=1 00:10:51.430 --rc genhtml_legend=1 00:10:51.430 --rc geninfo_all_blocks=1 00:10:51.430 --rc geninfo_unexecuted_blocks=1 00:10:51.430 00:10:51.430 ' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.430 --rc genhtml_branch_coverage=1 00:10:51.430 --rc genhtml_function_coverage=1 00:10:51.430 --rc genhtml_legend=1 00:10:51.430 --rc geninfo_all_blocks=1 00:10:51.430 --rc geninfo_unexecuted_blocks=1 00:10:51.430 00:10:51.430 ' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.430 --rc genhtml_branch_coverage=1 00:10:51.430 --rc genhtml_function_coverage=1 00:10:51.430 --rc genhtml_legend=1 00:10:51.430 --rc geninfo_all_blocks=1 00:10:51.430 --rc geninfo_unexecuted_blocks=1 00:10:51.430 00:10:51.430 ' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.430 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.430 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:51.431 
05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:51.431 05:28:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:51.431 Cannot find device "nvmf_init_br" 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:51.431 Cannot find device "nvmf_init_br2" 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:51.431 Cannot find device "nvmf_tgt_br" 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.431 Cannot find device "nvmf_tgt_br2" 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:51.431 Cannot find device "nvmf_init_br" 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:51.431 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:51.690 Cannot find device "nvmf_init_br2" 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:51.690 Cannot find device "nvmf_tgt_br" 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:51.690 Cannot find device "nvmf_tgt_br2" 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:51.690 Cannot find device "nvmf_br" 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:51.690 Cannot find device "nvmf_init_if" 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:51.690 Cannot find device "nvmf_init_if2" 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:51.690 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.690 05:28:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:51.690 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:51.690 
05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:51.690 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:51.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:51.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:10:51.949 00:10:51.949 --- 10.0.0.3 ping statistics --- 00:10:51.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.949 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:51.949 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:51.949 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:10:51.949 00:10:51.949 --- 10.0.0.4 ping statistics --- 00:10:51.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.949 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:51.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:51.949 00:10:51.949 --- 10.0.0.1 ping statistics --- 00:10:51.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.949 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:51.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:51.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:10:51.949 00:10:51.949 --- 10.0.0.2 ping statistics --- 00:10:51.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.949 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.949 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.949 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=68649 00:10:51.949 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:51.949 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 68649 00:10:51.949 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68649 ']' 00:10:51.949 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.949 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.949 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.949 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.949 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:51.949 [2024-12-16 05:28:32.126940] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:51.949 [2024-12-16 05:28:32.127777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.209 [2024-12-16 05:28:32.323319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.209 [2024-12-16 05:28:32.450200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.209 [2024-12-16 05:28:32.450284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.209 [2024-12-16 05:28:32.450320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.209 [2024-12-16 05:28:32.450347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.209 [2024-12-16 05:28:32.450365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.209 [2024-12-16 05:28:32.451825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.468 [2024-12-16 05:28:32.655653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.035 [2024-12-16 05:28:33.162450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.035 Malloc0 00:10:53.035 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.036 [2024-12-16 05:28:33.264348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68681 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68681 /var/tmp/bdevperf.sock 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68681 ']' 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.036 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.295 [2024-12-16 05:28:33.379370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
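For orientation, the rpc_cmd calls traced above are what stand up the target for the queue-depth case: a TCP transport, a 64 MiB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace and a TCP listener on 10.0.0.3:4420. A condensed sketch of the same sequence issued directly through rpc.py (rpc_cmd ultimately drives the same RPC interface; the RPC shell variable below is just shorthand, and the transport flags are reproduced verbatim from the trace rather than explained):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                    # transport options as used by this run
$RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420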
00:10:53.295 [2024-12-16 05:28:33.379539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68681 ] 00:10:53.553 [2024-12-16 05:28:33.567724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.554 [2024-12-16 05:28:33.694177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.813 [2024-12-16 05:28:33.878362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.381 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.381 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:54.381 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:54.381 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.381 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:54.381 NVMe0n1 00:10:54.381 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.381 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:54.381 Running I/O for 10 seconds... 00:10:56.728 5763.00 IOPS, 22.51 MiB/s [2024-12-16T05:28:37.925Z] 5632.00 IOPS, 22.00 MiB/s [2024-12-16T05:28:38.861Z] 5802.67 IOPS, 22.67 MiB/s [2024-12-16T05:28:39.798Z] 5856.25 IOPS, 22.88 MiB/s [2024-12-16T05:28:40.734Z] 5895.80 IOPS, 23.03 MiB/s [2024-12-16T05:28:41.670Z] 5973.67 IOPS, 23.33 MiB/s [2024-12-16T05:28:42.608Z] 6086.14 IOPS, 23.77 MiB/s [2024-12-16T05:28:43.985Z] 6197.62 IOPS, 24.21 MiB/s [2024-12-16T05:28:44.923Z] 6281.78 IOPS, 24.54 MiB/s [2024-12-16T05:28:44.923Z] 6401.00 IOPS, 25.00 MiB/s 00:11:04.664 Latency(us) 00:11:04.664 [2024-12-16T05:28:44.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.664 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:04.664 Verification LBA range: start 0x0 length 0x4000 00:11:04.664 NVMe0n1 : 10.07 6428.74 25.11 0.00 0.00 158493.30 11736.90 118203.11 00:11:04.664 [2024-12-16T05:28:44.923Z] =================================================================================================================== 00:11:04.664 [2024-12-16T05:28:44.923Z] Total : 6428.74 25.11 0.00 0.00 158493.30 11736.90 118203.11 00:11:04.664 { 00:11:04.664 "results": [ 00:11:04.664 { 00:11:04.664 "job": "NVMe0n1", 00:11:04.664 "core_mask": "0x1", 00:11:04.664 "workload": "verify", 00:11:04.664 "status": "finished", 00:11:04.664 "verify_range": { 00:11:04.664 "start": 0, 00:11:04.664 "length": 16384 00:11:04.664 }, 00:11:04.664 "queue_depth": 1024, 00:11:04.664 "io_size": 4096, 00:11:04.664 "runtime": 10.073826, 00:11:04.664 "iops": 6428.739190055497, 00:11:04.664 "mibps": 25.112262461154284, 00:11:04.664 "io_failed": 0, 00:11:04.664 "io_timeout": 0, 00:11:04.664 "avg_latency_us": 158493.30298047958, 00:11:04.664 "min_latency_us": 11736.901818181817, 00:11:04.664 "max_latency_us": 118203.11272727273 
00:11:04.664 } 00:11:04.664 ], 00:11:04.664 "core_count": 1 00:11:04.664 } 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 68681 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68681 ']' 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68681 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68681 00:11:04.664 killing process with pid 68681 00:11:04.664 Received shutdown signal, test time was about 10.000000 seconds 00:11:04.664 00:11:04.664 Latency(us) 00:11:04.664 [2024-12-16T05:28:44.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.664 [2024-12-16T05:28:44.923Z] =================================================================================================================== 00:11:04.664 [2024-12-16T05:28:44.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68681' 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68681 00:11:04.664 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68681 00:11:05.232 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:05.232 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:05.232 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.232 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:05.232 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.232 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:05.232 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.232 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.232 rmmod nvme_tcp 00:11:05.232 rmmod nvme_fabrics 00:11:05.232 rmmod nvme_keyring 00:11:05.232 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 68649 ']' 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 68649 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68649 ']' 
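The numbers in the table above come from SPDK's bdevperf example application rather than fio: bdevperf is started idle, the remote namespace is attached to it as an NVMe bdev over TCP, and perform_tests triggers the 10-second verify run at queue depth 1024. A condensed sketch of the sequence visible in the trace (paths and socket name as used in this environment):
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$BDEVPERF -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &   # wait idle (-z), qd 1024, 4 KiB I/O
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The run settles at roughly 6400 IOPS (about 25 MiB/s) with no failed or timed-out I/O, which is what the JSON block above reports.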
00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68649 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68649 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:05.491 killing process with pid 68649 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68649' 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68649 00:11:05.491 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68649 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:06.427 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:06.427 05:28:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:06.686 ************************************ 00:11:06.686 END TEST nvmf_queue_depth 00:11:06.686 ************************************ 00:11:06.686 00:11:06.686 real 0m15.357s 00:11:06.686 user 0m25.674s 00:11:06.686 sys 0m2.453s 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.686 ************************************ 00:11:06.686 START TEST nvmf_target_multipath 00:11:06.686 ************************************ 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:06.686 * Looking for test storage... 
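Between the two test cases, nvmftestfini tears the virtual fabric back down; the trace above shows the individual steps. Roughly, and with the caveat that the iptr and _remove_spdk_ns helpers are only partially visible here (the iptables pipeline and the final namespace removal are assumptions based on the commands that do appear):
iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only the SPDK-tagged ACCEPT/FORWARD rules (assumed composition of iptr)
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" nomaster; ip link set "$l" down; done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                          # presumed content of _remove_spdk_ns; its body is silenced in the trace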
00:11:06.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:06.686 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:06.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.947 --rc genhtml_branch_coverage=1 00:11:06.947 --rc genhtml_function_coverage=1 00:11:06.947 --rc genhtml_legend=1 00:11:06.947 --rc geninfo_all_blocks=1 00:11:06.947 --rc geninfo_unexecuted_blocks=1 00:11:06.947 00:11:06.947 ' 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:06.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.947 --rc genhtml_branch_coverage=1 00:11:06.947 --rc genhtml_function_coverage=1 00:11:06.947 --rc genhtml_legend=1 00:11:06.947 --rc geninfo_all_blocks=1 00:11:06.947 --rc geninfo_unexecuted_blocks=1 00:11:06.947 00:11:06.947 ' 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:06.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.947 --rc genhtml_branch_coverage=1 00:11:06.947 --rc genhtml_function_coverage=1 00:11:06.947 --rc genhtml_legend=1 00:11:06.947 --rc geninfo_all_blocks=1 00:11:06.947 --rc geninfo_unexecuted_blocks=1 00:11:06.947 00:11:06.947 ' 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:06.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.947 --rc genhtml_branch_coverage=1 00:11:06.947 --rc genhtml_function_coverage=1 00:11:06.947 --rc genhtml_legend=1 00:11:06.947 --rc geninfo_all_blocks=1 00:11:06.947 --rc geninfo_unexecuted_blocks=1 00:11:06.947 00:11:06.947 ' 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.947 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.947 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.947 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.947 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.947 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.947 
05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.947 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.948 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:06.948 05:28:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:06.948 Cannot find device "nvmf_init_br" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:06.948 Cannot find device "nvmf_init_br2" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:06.948 Cannot find device "nvmf_tgt_br" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.948 Cannot find device "nvmf_tgt_br2" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:06.948 Cannot find device "nvmf_init_br" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:06.948 Cannot find device "nvmf_init_br2" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:06.948 Cannot find device "nvmf_tgt_br" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:06.948 Cannot find device "nvmf_tgt_br2" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:06.948 Cannot find device "nvmf_br" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:06.948 Cannot find device "nvmf_init_if" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:06.948 Cannot find device "nvmf_init_if2" 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:06.948 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:07.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:07.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:11:07.208 00:11:07.208 --- 10.0.0.3 ping statistics --- 00:11:07.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.208 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:07.208 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:07.208 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:11:07.208 00:11:07.208 --- 10.0.0.4 ping statistics --- 00:11:07.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.208 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:07.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:07.208 00:11:07.208 --- 10.0.0.1 ping statistics --- 00:11:07.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.208 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:07.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:11:07.208 00:11:07.208 --- 10.0.0.2 ping statistics --- 00:11:07.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.208 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=69074 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 69074 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 69074 ']' 00:11:07.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.208 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:07.481 [2024-12-16 05:28:47.551726] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:07.481 [2024-12-16 05:28:47.552179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.753 [2024-12-16 05:28:47.744844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.753 [2024-12-16 05:28:47.878371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.753 [2024-12-16 05:28:47.878440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.753 [2024-12-16 05:28:47.878480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.753 [2024-12-16 05:28:47.878496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.753 [2024-12-16 05:28:47.878512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
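The multipath case rebuilds the fabric with two target-side paths: a network namespace that will host nvmf_tgt, veth pairs giving the target 10.0.0.3 and 10.0.0.4 and the initiator 10.0.0.1 and 10.0.0.2, everything enslaved to one bridge, and ACCEPT rules for TCP port 4420. Condensed from the commands traced above, showing one of the two identical path setups:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                        # sanity check before starting the target
The target itself then runs inside the namespace, here as ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF (pid 69074 above, four reactors), so its listeners are reachable only through the bridge.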
00:11:07.753 [2024-12-16 05:28:47.880833] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.753 [2024-12-16 05:28:47.880987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.753 [2024-12-16 05:28:47.881207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.753 [2024-12-16 05:28:47.881699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.011 [2024-12-16 05:28:48.077696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.269 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.269 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:08.269 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.269 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.269 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:08.528 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.528 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:08.786 [2024-12-16 05:28:48.832125] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.786 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:09.045 Malloc0 00:11:09.045 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:09.303 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.562 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:09.820 [2024-12-16 05:28:49.913613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:09.820 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:10.078 [2024-12-16 05:28:50.153838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:10.078 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:10.078 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:11:10.337 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:10.337 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:10.337 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.337 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:10.337 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
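With the two listeners registered (10.0.0.3 and 10.0.0.4, both port 4420), the host connects twice to the same subsystem NQN, once per portal, and the kernel exposes a single multipath namespace with two controller paths, visible here as nvme0c0n1 and nvme0c1n1 under nvme-subsys0. A host-side sketch; the HOSTNQN/HOSTID values are the ones generated earlier in this run, and the lsblk/ls lines mirror what waitforserial and get_subsystem do in the trace:
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec
HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME      # 1 once the namespace has appeared
ls -d /sys/class/nvme-subsystem/nvme-subsys0/nvme*/nvme*c*  # the two per-path block nodes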
00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=69169 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:12.240 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:12.240 [global] 00:11:12.240 thread=1 00:11:12.240 invalidate=1 00:11:12.240 rw=randrw 00:11:12.240 time_based=1 00:11:12.240 runtime=6 00:11:12.240 ioengine=libaio 00:11:12.240 direct=1 00:11:12.240 bs=4096 00:11:12.240 iodepth=128 00:11:12.240 norandommap=0 00:11:12.240 numjobs=1 00:11:12.240 00:11:12.240 verify_dump=1 00:11:12.240 verify_backlog=512 00:11:12.240 verify_state_save=0 00:11:12.240 do_verify=1 00:11:12.240 verify=crc32c-intel 00:11:12.240 [job0] 00:11:12.240 filename=/dev/nvme0n1 00:11:12.499 Could not set queue depth (nvme0n1) 00:11:12.499 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:12.499 fio-3.35 00:11:12.499 Starting 1 thread 00:11:13.435 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:13.694 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:13.953 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:14.211 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:14.470 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:14.470 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:14.470 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:14.470 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:14.470 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:14.470 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:14.470 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:14.471 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:14.471 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:14.471 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:14.471 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:14.471 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:14.471 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 69169 00:11:18.659 00:11:18.659 job0: (groupid=0, jobs=1): err= 0: pid=69190: Mon Dec 16 05:28:58 2024 00:11:18.659 read: IOPS=8744, BW=34.2MiB/s (35.8MB/s)(205MiB/6009msec) 00:11:18.659 slat (usec): min=7, max=7245, avg=68.50, stdev=269.96 00:11:18.659 clat (usec): min=1923, max=17609, avg=10005.67, stdev=1703.62 00:11:18.659 lat (usec): min=1933, max=17621, avg=10074.17, stdev=1707.54 00:11:18.659 clat percentiles (usec): 00:11:18.659 | 1.00th=[ 5080], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9110], 00:11:18.659 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:11:18.659 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11469], 95.00th=[13829], 00:11:18.659 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16712], 99.95th=[16909], 00:11:18.659 | 99.99th=[17433] 00:11:18.659 bw ( KiB/s): min= 3968, max=21104, per=51.24%, avg=17924.58, stdev=4854.06, samples=12 00:11:18.659 iops : min= 992, max= 5276, avg=4481.08, stdev=1213.48, samples=12 00:11:18.659 write: IOPS=5136, BW=20.1MiB/s (21.0MB/s)(106MiB/5258msec); 0 zone resets 00:11:18.659 slat (usec): min=15, max=2142, avg=77.46, stdev=198.99 00:11:18.659 clat (usec): min=1728, max=17564, avg=8752.85, stdev=1598.05 00:11:18.659 lat (usec): min=1767, max=17592, avg=8830.31, stdev=1603.95 00:11:18.659 clat percentiles (usec): 00:11:18.659 | 1.00th=[ 3752], 5.00th=[ 5080], 10.00th=[ 6783], 20.00th=[ 8160], 00:11:18.659 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:11:18.659 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10421], 00:11:18.659 | 99.00th=[13435], 99.50th=[14353], 99.90th=[15926], 99.95th=[16319], 00:11:18.659 | 99.99th=[17433] 00:11:18.659 bw ( KiB/s): min= 4256, max=21205, per=87.40%, avg=17959.08, stdev=4754.29, samples=12 00:11:18.659 iops : min= 1064, max= 5301, avg=4489.75, stdev=1188.56, samples=12 00:11:18.659 lat (msec) : 2=0.01%, 4=0.65%, 10=66.28%, 20=33.05% 00:11:18.659 cpu : usr=5.39%, sys=19.19%, ctx=4579, majf=0, minf=90 00:11:18.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:18.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.659 issued rwts: total=52546,27009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.659 00:11:18.659 Run status group 0 (all jobs): 00:11:18.659 READ: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=205MiB (215MB), run=6009-6009msec 00:11:18.659 WRITE: bw=20.1MiB/s (21.0MB/s), 20.1MiB/s-20.1MiB/s (21.0MB/s-21.0MB/s), io=106MiB (111MB), run=5258-5258msec 00:11:18.659 00:11:18.659 Disk stats (read/write): 00:11:18.659 nvme0n1: ios=51777/26509, merge=0/0, ticks=497505/218233, in_queue=715738, util=98.65% 00:11:18.659 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:18.917 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=69268 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:19.177 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:19.177 [global] 00:11:19.177 thread=1 00:11:19.177 invalidate=1 00:11:19.177 rw=randrw 00:11:19.177 time_based=1 00:11:19.177 runtime=6 00:11:19.177 ioengine=libaio 00:11:19.177 direct=1 00:11:19.177 bs=4096 00:11:19.177 iodepth=128 00:11:19.177 norandommap=0 00:11:19.177 numjobs=1 00:11:19.177 00:11:19.177 verify_dump=1 00:11:19.177 verify_backlog=512 00:11:19.177 verify_state_save=0 00:11:19.177 do_verify=1 00:11:19.177 verify=crc32c-intel 00:11:19.177 [job0] 00:11:19.177 filename=/dev/nvme0n1 00:11:19.177 Could not set queue depth (nvme0n1) 00:11:19.436 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:19.436 fio-3.35 00:11:19.436 Starting 1 thread 00:11:20.371 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:20.630 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:20.889 
05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:20.889 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:21.147 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:21.406 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 69268 00:11:25.644 00:11:25.644 job0: (groupid=0, jobs=1): err= 0: pid=69295: Mon Dec 16 05:29:05 2024 00:11:25.644 read: IOPS=9782, BW=38.2MiB/s (40.1MB/s)(230MiB/6009msec) 00:11:25.644 slat (usec): min=2, max=14238, avg=52.57, stdev=239.34 00:11:25.644 clat (usec): min=397, max=25847, avg=9072.53, stdev=2393.68 00:11:25.644 lat (usec): min=411, max=25859, avg=9125.11, stdev=2412.82 00:11:25.644 clat percentiles (usec): 00:11:25.644 | 1.00th=[ 3326], 5.00th=[ 4752], 10.00th=[ 5604], 20.00th=[ 7046], 00:11:25.644 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:11:25.644 | 70.00th=[10159], 80.00th=[10421], 90.00th=[11076], 95.00th=[12780], 00:11:25.644 | 99.00th=[15401], 99.50th=[15795], 99.90th=[20317], 99.95th=[22414], 00:11:25.644 | 99.99th=[23725] 00:11:25.644 bw ( KiB/s): min= 8032, max=35008, per=51.63%, avg=20203.33, stdev=8210.52, samples=12 00:11:25.644 iops : min= 2008, max= 8752, avg=5050.83, stdev=2052.63, samples=12 00:11:25.644 write: IOPS=5999, BW=23.4MiB/s (24.6MB/s)(119MiB/5080msec); 0 zone resets 00:11:25.644 slat (usec): min=4, max=1862, avg=61.53, stdev=165.50 00:11:25.644 clat (usec): min=1725, max=17314, avg=7519.14, stdev=2280.02 00:11:25.644 lat (usec): min=1775, max=17341, avg=7580.68, stdev=2301.61 00:11:25.644 clat percentiles (usec): 00:11:25.644 | 1.00th=[ 2933], 5.00th=[ 3720], 10.00th=[ 4228], 20.00th=[ 5014], 00:11:25.644 | 30.00th=[ 5800], 40.00th=[ 7504], 50.00th=[ 8356], 60.00th=[ 8717], 00:11:25.644 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[10290], 00:11:25.644 | 99.00th=[12387], 99.50th=[13698], 99.90th=[15401], 99.95th=[16057], 00:11:25.644 | 99.99th=[17171] 00:11:25.644 bw ( KiB/s): min= 8192, max=34536, per=84.46%, avg=20268.67, stdev=8072.34, samples=12 00:11:25.644 iops : min= 2048, max= 8634, avg=5067.17, stdev=2018.08, samples=12 00:11:25.644 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:25.644 lat (msec) : 2=0.11%, 4=3.96%, 10=71.19%, 20=24.61%, 50=0.11% 00:11:25.644 cpu : usr=5.28%, sys=20.77%, ctx=5037, majf=0, minf=114 00:11:25.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:25.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.644 issued rwts: total=58782,30475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.644 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:11:25.644 00:11:25.644 Run status group 0 (all jobs): 00:11:25.644 READ: bw=38.2MiB/s (40.1MB/s), 38.2MiB/s-38.2MiB/s (40.1MB/s-40.1MB/s), io=230MiB (241MB), run=6009-6009msec 00:11:25.644 WRITE: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=119MiB (125MB), run=5080-5080msec 00:11:25.644 00:11:25.644 Disk stats (read/write): 00:11:25.644 nvme0n1: ios=58017/29958, merge=0/0, ticks=504171/210690, in_queue=714861, util=98.61% 00:11:25.644 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:25.644 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.644 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:11:25.644 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:25.644 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.644 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:25.644 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.644 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:11:25.644 05:29:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.903 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.903 rmmod nvme_tcp 00:11:25.903 rmmod nvme_fabrics 00:11:25.903 rmmod nvme_keyring 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 69074 ']' 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 69074 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 69074 ']' 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 69074 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.161 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69074 00:11:26.161 killing process with pid 69074 00:11:26.162 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.162 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.162 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69074' 00:11:26.162 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 69074 00:11:26.162 05:29:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 69074 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:27.097 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:27.357 
05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:27.357 ************************************ 00:11:27.357 END TEST nvmf_target_multipath 00:11:27.357 ************************************ 00:11:27.357 00:11:27.357 real 0m20.738s 00:11:27.357 user 1m15.444s 00:11:27.357 sys 0m9.877s 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.357 ************************************ 00:11:27.357 START TEST nvmf_zcopy 00:11:27.357 ************************************ 00:11:27.357 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:27.617 * Looking for test storage... 
00:11:27.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:27.617 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:27.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.618 --rc genhtml_branch_coverage=1 00:11:27.618 --rc genhtml_function_coverage=1 00:11:27.618 --rc genhtml_legend=1 00:11:27.618 --rc geninfo_all_blocks=1 00:11:27.618 --rc geninfo_unexecuted_blocks=1 00:11:27.618 00:11:27.618 ' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:27.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.618 --rc genhtml_branch_coverage=1 00:11:27.618 --rc genhtml_function_coverage=1 00:11:27.618 --rc genhtml_legend=1 00:11:27.618 --rc geninfo_all_blocks=1 00:11:27.618 --rc geninfo_unexecuted_blocks=1 00:11:27.618 00:11:27.618 ' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:27.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.618 --rc genhtml_branch_coverage=1 00:11:27.618 --rc genhtml_function_coverage=1 00:11:27.618 --rc genhtml_legend=1 00:11:27.618 --rc geninfo_all_blocks=1 00:11:27.618 --rc geninfo_unexecuted_blocks=1 00:11:27.618 00:11:27.618 ' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:27.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.618 --rc genhtml_branch_coverage=1 00:11:27.618 --rc genhtml_function_coverage=1 00:11:27.618 --rc genhtml_legend=1 00:11:27.618 --rc geninfo_all_blocks=1 00:11:27.618 --rc geninfo_unexecuted_blocks=1 00:11:27.618 00:11:27.618 ' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
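The lt/cmp_versions trace above boils down to a field-wise numeric comparison of the two version strings split on '.', '-' and ':'. A minimal stand-alone sketch of that logic (not the exact scripts/common.sh source, just what the trace shows) is:

#!/usr/bin/env bash
# Sketch of the version check traced above: split on .-: and compare fields.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly older
        (( a > b )) && return 1
    done
    return 1                      # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2, keep the branch/function coverage opts"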
00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.618 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
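The hostnqn/hostid generated by common.sh above are the same values the earlier multipath connects consumed. A rough initiator-side sketch of how those pieces fit together (addresses, NQN and flags copied from the connect commands in this log; the hostid derivation here is an assumption, not the common.sh source) is:

#!/usr/bin/env bash
# Sketch only: how the NVME_HOST settings feed the two-path connect seen
# earlier in this log.
NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # bare uuid, reused as hostid (assumed)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVMF_PORT=4420
subnqn=nqn.2016-06.io.spdk:cnode1

# One connect per listener address; both controllers land under the same
# subsystem (nvme-subsys0), giving the nvme0c0n1/nvme0c1n1 multipath pair.
# -g -G are carried over verbatim from the connect commands traced above.
for addr in 10.0.0.3 10.0.0.4; do
    nvme connect "${NVME_HOST[@]}" -t tcp -n "$subnqn" -a "$addr" -s "$NVMF_PORT" -g -G
done

lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # 1 namespace expected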
00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:27.618 Cannot find device "nvmf_init_br" 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:27.618 05:29:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:27.618 Cannot find device "nvmf_init_br2" 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:27.618 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:27.619 Cannot find device "nvmf_tgt_br" 00:11:27.619 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:27.619 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.619 Cannot find device "nvmf_tgt_br2" 00:11:27.619 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:27.619 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:27.878 Cannot find device "nvmf_init_br" 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:27.878 Cannot find device "nvmf_init_br2" 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:27.878 Cannot find device "nvmf_tgt_br" 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:27.878 Cannot find device "nvmf_tgt_br2" 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:27.878 Cannot find device "nvmf_br" 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:27.878 Cannot find device "nvmf_init_if" 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:27.878 Cannot find device "nvmf_init_if2" 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.878 05:29:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:27.878 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:28.138 05:29:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:28.138 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:28.138 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:11:28.138 00:11:28.138 --- 10.0.0.3 ping statistics --- 00:11:28.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.138 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:28.138 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:28.138 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:11:28.138 00:11:28.138 --- 10.0.0.4 ping statistics --- 00:11:28.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.138 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:11:28.138 00:11:28.138 --- 10.0.0.1 ping statistics --- 00:11:28.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.138 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:28.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:28.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:11:28.138 00:11:28.138 --- 10.0.0.2 ping statistics --- 00:11:28.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.138 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69606 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69606 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 69606 ']' 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.138 05:29:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:28.138 [2024-12-16 05:29:08.319141] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
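Condensed from the nvmf_veth_init trace above, the virtual topology is two initiator veths on the host (10.0.0.1/2) bridged via nvmf_br to two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3/4). A stand-alone sketch of that setup (needs root; the iptables ACCEPT rules from the trace are omitted) is:

#!/usr/bin/env bash
# Sketch of the veth/netns topology the trace above builds.
set -e
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so 10.0.0.1/2 can reach 10.0.0.3/4.
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4   # initiator -> target paths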
00:11:28.138 [2024-12-16 05:29:08.319302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.398 [2024-12-16 05:29:08.499055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.398 [2024-12-16 05:29:08.625286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.398 [2024-12-16 05:29:08.625390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.398 [2024-12-16 05:29:08.625428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.398 [2024-12-16 05:29:08.625456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.398 [2024-12-16 05:29:08.625473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.398 [2024-12-16 05:29:08.626966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.657 [2024-12-16 05:29:08.825303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.225 [2024-12-16 05:29:09.375280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:29.225 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.226 [2024-12-16 05:29:09.391473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.226 malloc0 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:29.226 { 00:11:29.226 "params": { 00:11:29.226 "name": "Nvme$subsystem", 00:11:29.226 "trtype": "$TEST_TRANSPORT", 00:11:29.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.226 "adrfam": "ipv4", 00:11:29.226 "trsvcid": "$NVMF_PORT", 00:11:29.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.226 "hdgst": ${hdgst:-false}, 00:11:29.226 "ddgst": ${ddgst:-false} 00:11:29.226 }, 00:11:29.226 "method": "bdev_nvme_attach_controller" 00:11:29.226 } 00:11:29.226 EOF 00:11:29.226 )") 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
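At this point target/zcopy.sh has configured the running nvmf_tgt over JSON-RPC (the rpc_cmd calls above), and gen_nvmf_target_json is emitting the bdevperf configuration printed next. rpc_cmd in these test scripts is, in essence, a wrapper around scripts/rpc.py, so the same target setup could be reproduced by hand roughly as below; this is a sketch that assumes the default RPC socket /var/tmp/spdk.sock reported by waitforlisten above, with the flag meanings in the comments taken from the standard rpc.py options:

  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                   # TCP transport with zero-copy enabled
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                                              # allow any host, set serial, max 10 namespaces
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420                                                  # listen on 10.0.0.3:4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                          # 32 MB RAM-backed bdev, 4096-byte blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # expose malloc0 as NSID 1

The JSON printed next is what bdevperf reads from /dev/fd/62: a bdev_nvme_attach_controller entry that points Nvme1 at nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420.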
00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:29.226 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:29.226 "params": { 00:11:29.226 "name": "Nvme1", 00:11:29.226 "trtype": "tcp", 00:11:29.226 "traddr": "10.0.0.3", 00:11:29.226 "adrfam": "ipv4", 00:11:29.226 "trsvcid": "4420", 00:11:29.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:29.226 "hdgst": false, 00:11:29.226 "ddgst": false 00:11:29.226 }, 00:11:29.226 "method": "bdev_nvme_attach_controller" 00:11:29.226 }' 00:11:29.485 [2024-12-16 05:29:09.566491] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:29.485 [2024-12-16 05:29:09.566694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69639 ] 00:11:29.744 [2024-12-16 05:29:09.756013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.744 [2024-12-16 05:29:09.883140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.004 [2024-12-16 05:29:10.070298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:30.004 Running I/O for 10 seconds... 00:11:32.318 4770.00 IOPS, 37.27 MiB/s [2024-12-16T05:29:13.514Z] 4814.00 IOPS, 37.61 MiB/s [2024-12-16T05:29:14.451Z] 4864.00 IOPS, 38.00 MiB/s [2024-12-16T05:29:15.388Z] 4900.50 IOPS, 38.29 MiB/s [2024-12-16T05:29:16.325Z] 4922.60 IOPS, 38.46 MiB/s [2024-12-16T05:29:17.269Z] 4943.17 IOPS, 38.62 MiB/s [2024-12-16T05:29:18.648Z] 4945.71 IOPS, 38.64 MiB/s [2024-12-16T05:29:19.584Z] 4948.00 IOPS, 38.66 MiB/s [2024-12-16T05:29:20.521Z] 4957.89 IOPS, 38.73 MiB/s [2024-12-16T05:29:20.521Z] 4959.80 IOPS, 38.75 MiB/s 00:11:40.262 Latency(us) 00:11:40.262 [2024-12-16T05:29:20.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.262 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:40.262 Verification LBA range: start 0x0 length 0x1000 00:11:40.262 Nvme1n1 : 10.02 4962.91 38.77 0.00 0.00 25722.71 4110.89 34793.66 00:11:40.262 [2024-12-16T05:29:20.521Z] =================================================================================================================== 00:11:40.262 [2024-12-16T05:29:20.521Z] Total : 4962.91 38.77 0.00 0.00 25722.71 4110.89 34793.66 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69774 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:41.200 { 00:11:41.200 "params": { 00:11:41.200 "name": "Nvme$subsystem", 00:11:41.200 "trtype": "$TEST_TRANSPORT", 00:11:41.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:41.200 "adrfam": "ipv4", 00:11:41.200 "trsvcid": "$NVMF_PORT", 00:11:41.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:41.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:41.200 "hdgst": ${hdgst:-false}, 00:11:41.200 "ddgst": ${ddgst:-false} 00:11:41.200 }, 00:11:41.200 "method": "bdev_nvme_attach_controller" 00:11:41.200 } 00:11:41.200 EOF 00:11:41.200 )") 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:41.200 [2024-12-16 05:29:21.172085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.200 [2024-12-16 05:29:21.172144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:41.200 05:29:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:41.200 "params": { 00:11:41.200 "name": "Nvme1", 00:11:41.200 "trtype": "tcp", 00:11:41.200 "traddr": "10.0.0.3", 00:11:41.200 "adrfam": "ipv4", 00:11:41.200 "trsvcid": "4420", 00:11:41.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:41.200 "hdgst": false, 00:11:41.200 "ddgst": false 00:11:41.200 }, 00:11:41.200 "method": "bdev_nvme_attach_controller" 00:11:41.200 }' 00:11:41.200 [2024-12-16 05:29:21.183992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.200 [2024-12-16 05:29:21.184048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.200 [2024-12-16 05:29:21.196021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.200 [2024-12-16 05:29:21.196066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.200 [2024-12-16 05:29:21.204002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.200 [2024-12-16 05:29:21.204055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.200 [2024-12-16 05:29:21.212033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.212078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.224045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.224096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.232017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.232060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.240042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.240090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.248082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.248127] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.256022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.256070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.268052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.268105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.276042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.276088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.279620] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:41.201 [2024-12-16 05:29:21.279784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69774 ] 00:11:41.201 [2024-12-16 05:29:21.284064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.284108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.292061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.292107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.300063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.300106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.312079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.312156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.320085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.320161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.328068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.328142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.336079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.336151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.344128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.344252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.356239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.356325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.364088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:41.201 [2024-12-16 05:29:21.364162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.372073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.372145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.380141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.380215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.388094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.388166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.400092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.400168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.408102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.408159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.416097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.416187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.424120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.424164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.432150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.432269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.444197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.444288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.201 [2024-12-16 05:29:21.452145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.201 [2024-12-16 05:29:21.452204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.462077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.463 [2024-12-16 05:29:21.464205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.464262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.476189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.476302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.488185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.488234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.500201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.500279] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.512147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.512205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.524163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.524239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.536248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.536338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.548273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.548371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.558115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.463 [2024-12-16 05:29:21.560170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.560227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.572203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.572306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.584215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.584273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.596201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.596259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.604235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.604304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.616226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.616299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.624250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.624326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.636312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.636386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.648257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.648310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.656229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.656302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.668263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.668316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.676227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.676300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.688267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.688320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.696272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.696344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.704217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.704285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.463 [2024-12-16 05:29:21.712345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.463 [2024-12-16 05:29:21.712402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.724294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.724379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.728675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:41.723 [2024-12-16 05:29:21.732299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.732357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.744374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.744457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.756335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.756417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.764300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.764353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.772273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.772346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.780260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.780312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.788309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.788366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:41.723 [2024-12-16 05:29:21.796298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.796351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.808279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.808335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.820402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.820482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.832349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.832431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.840361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.840439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.852366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.852420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.860343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.860401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.868363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.868418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.876393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.876471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.884393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.884449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.892421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.892479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.904421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.904478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.912447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.912504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 Running I/O for 5 seconds... 
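The second bdevperf instance launched above (perfpid 69774, a 5-second -w randrw run at queue depth 128 with 8192-byte I/O) now drives I/O against the same namespace while the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1. Since malloc0 already occupies NSID 1, every attempt is rejected, which is why the same pair of errors (spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused) repeats above and throughout the rest of this phase; the point appears to be to exercise the subsystem pause/resume path while zero-copy I/O is in flight, so these errors read as expected output rather than a failure. A rough sketch of that retry loop, under the assumption that perfpid holds the bdevperf PID (an illustration, not the script's literal code):

  while kill -0 "$perfpid" 2> /dev/null; do
      # NSID 1 is already taken by malloc0, so this call is expected to fail each time;
      # what matters is the pause/resume churn it causes on cnode1 while I/O is running
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done

The IOPS/MiB/s figures interleaved with the errors further down (e.g. 9209.00 IOPS, 71.95 MiB/s) are bdevperf's periodic progress output for this run.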
00:11:41.723 [2024-12-16 05:29:21.925681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.925759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.941362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.941420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.957778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.957855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.723 [2024-12-16 05:29:21.973744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.723 [2024-12-16 05:29:21.973820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:21.987558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:21.987697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.003413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.003472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.019417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.019530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.032794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.032865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.052229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.052323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.068529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.068585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.083981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.084081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.095172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.095230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.109719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.109780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.128396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.128487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.143989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 
[2024-12-16 05:29:22.144064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.155524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.155582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.172368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.172432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.189581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.189670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.202786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.202865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.217221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.217279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.983 [2024-12-16 05:29:22.230931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.983 [2024-12-16 05:29:22.231023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.248338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.248397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.264338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.264399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.281355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.281412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.293249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.293310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.305022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.305080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.317910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.318007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.334056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.334115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.349711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.349802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.361252] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.361315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.374057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.374119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.389282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.389339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.401157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.401218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.416968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.417042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.430035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.430099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.446466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.446524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.459141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.459218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.476444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.476502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.243 [2024-12-16 05:29:22.491623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.243 [2024-12-16 05:29:22.491737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.505097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.505188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.522160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.522236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.537336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.537393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.554241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.554321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.565625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.565696] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.579545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.579648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.595405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.595465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.607549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.607691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.626298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.626357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.641986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.642051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.656798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.656874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.673149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.673215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.690557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.690633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.707222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.707302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.723328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.723387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.735331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.735426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.503 [2024-12-16 05:29:22.749071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.503 [2024-12-16 05:29:22.749129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.766159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.766236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.782929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.782987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.799258] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.799338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.815375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.815434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.831933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.832003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.844903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.844995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.865280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.865345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.880453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.880513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.897853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.897907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.912659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.912716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 9209.00 IOPS, 71.95 MiB/s [2024-12-16T05:29:23.022Z] [2024-12-16 05:29:22.930247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.930320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.943233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.943292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.961533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.961643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.974990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.975049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:22.988800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:22.988882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:23.005837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.763 [2024-12-16 05:29:23.005900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.763 [2024-12-16 05:29:23.020427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:42.763 [2024-12-16 05:29:23.020476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.022 [2024-12-16 05:29:23.038135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.022 [2024-12-16 05:29:23.038192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.022 [2024-12-16 05:29:23.050669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.022 [2024-12-16 05:29:23.050749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.022 [2024-12-16 05:29:23.068712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.022 [2024-12-16 05:29:23.068761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.022 [2024-12-16 05:29:23.085745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.022 [2024-12-16 05:29:23.085799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.022 [2024-12-16 05:29:23.098472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.022 [2024-12-16 05:29:23.098529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.022 [2024-12-16 05:29:23.114058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.022 [2024-12-16 05:29:23.114137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.127743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.127801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.144202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.144275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.159107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.159184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.171390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.171447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.189047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.189104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.201221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.201278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.219141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.219199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.233862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.233921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.245381] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.245438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.259730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.259788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.023 [2024-12-16 05:29:23.276425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.023 [2024-12-16 05:29:23.276501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.290545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.290642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.307729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.307788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.323385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.323445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.335244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.335304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.348732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.348817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.365968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.366049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.381557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.381631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.394775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.394834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.412613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.412684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.427697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.427755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.439739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.439805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.457255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.457314] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.473730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.473805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.489526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.489584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.501231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.501290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.512949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.513008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.282 [2024-12-16 05:29:23.525718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.282 [2024-12-16 05:29:23.525778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.542230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.542292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.558245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.558308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.570142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.570200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.585501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.585560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.598463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.598521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.614906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.614950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.630742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.630786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.642255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.642313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.656412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.656471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.669658] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.669739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.683674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.683718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.699272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.699332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.714700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.714760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.726931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.727004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.746319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.746386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.763300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.763363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.775943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.776005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.542 [2024-12-16 05:29:23.794105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.542 [2024-12-16 05:29:23.794165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.807783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.807830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.825095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.825156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.841379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.841439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.858240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.858301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.875227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.875286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.888171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.888232] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.905988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.906050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 9206.00 IOPS, 71.92 MiB/s [2024-12-16T05:29:24.061Z] [2024-12-16 05:29:23.918593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.918703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.936127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.936238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.952752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.952828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.965843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.965915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.983516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.983574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:23.999086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:23.999145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:24.010490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:24.010549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:24.025852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:24.025897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:24.043815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:24.043872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.802 [2024-12-16 05:29:24.057643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.802 [2024-12-16 05:29:24.057705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.073404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.073464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.091637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.091703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.107982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.108032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 
05:29:24.120244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.120319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.131910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.131957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.148470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.148530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.163988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.164052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.175711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.175770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.191086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.191133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.208691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.208750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.222452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.222512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.236843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.236903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.250627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.250697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.267049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.267125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.282272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.282331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.293817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.293877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.062 [2024-12-16 05:29:24.309527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.062 [2024-12-16 05:29:24.309615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.323002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.323063] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.339136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.339197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.354513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.354573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.366508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.366566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.386160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.386219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.402171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.402231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.417897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.417960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.431195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.431255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.446863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.446926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.462689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.462750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.480371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.480430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.497321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.497381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.513494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.513554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.530021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.530079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.541739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.541798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.556407] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.556465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.322 [2024-12-16 05:29:24.570710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.322 [2024-12-16 05:29:24.570769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.581 [2024-12-16 05:29:24.587556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.581 [2024-12-16 05:29:24.587646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.581 [2024-12-16 05:29:24.602973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.581 [2024-12-16 05:29:24.603032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.581 [2024-12-16 05:29:24.614147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.581 [2024-12-16 05:29:24.614205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.581 [2024-12-16 05:29:24.628629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.581 [2024-12-16 05:29:24.628723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.581 [2024-12-16 05:29:24.641783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.581 [2024-12-16 05:29:24.641841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.581 [2024-12-16 05:29:24.657945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.581 [2024-12-16 05:29:24.658004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.581 [2024-12-16 05:29:24.670813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.581 [2024-12-16 05:29:24.670871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.687509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.687568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.703632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.703703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.718840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.718900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.731060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.731118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.745431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.745490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.761381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.761438] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.777218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.777276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.790054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.790112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.807308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.807366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.582 [2024-12-16 05:29:24.823162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.582 [2024-12-16 05:29:24.823220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:24.840400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.840459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:24.856941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.857001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:24.868446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.868520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:24.885992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.886086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:24.903320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.903391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 9192.00 IOPS, 71.81 MiB/s [2024-12-16T05:29:25.100Z] [2024-12-16 05:29:24.919054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.919115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:24.931498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.931558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:24.949561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.949634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:24.965009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.965071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:24.981835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.981896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 
05:29:24.998176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:24.998235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:25.010442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:25.010502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:25.022793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:25.022851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:25.035232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:25.035290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:25.051499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:25.051543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:25.065532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:25.065576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:25.079743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:25.079789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.841 [2024-12-16 05:29:25.096016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.841 [2024-12-16 05:29:25.096078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.108599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.108676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.128752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.128815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.145387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.145447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.159102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.159161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.173361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.173403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.186864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.186923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.204725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.204784] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.217771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.217830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.233662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.233720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.249902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.249979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.266576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.266662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.278549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.278650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.291060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.291118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.304185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.304260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.317307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.317365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.330824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.330883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.100 [2024-12-16 05:29:25.348469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.100 [2024-12-16 05:29:25.348527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.361620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.361727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.375018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.375076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.391014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.391073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.406545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.406647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.418701] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.418760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.436507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.436565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.452281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.452339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.464137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.464214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.478731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.478790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.496385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.496443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.508277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.508335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.525925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.525971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.539958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.540007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.557680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.557768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.573375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.573434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.359 [2024-12-16 05:29:25.590173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.359 [2024-12-16 05:29:25.590233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.360 [2024-12-16 05:29:25.607406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.360 [2024-12-16 05:29:25.607467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.619947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.619989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.638461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.638523] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.651440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.651499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.667002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.667061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.682526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.682585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.698567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.698636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.715344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.715403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.732033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.732094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.749389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.749448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.760937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.760995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.775756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.775815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.793108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.793167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.805119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.805177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.822573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.822660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.838676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.838735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.856391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.856452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.619 [2024-12-16 05:29:25.868080] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.619 [2024-12-16 05:29:25.868157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.878 [2024-12-16 05:29:25.881422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.878 [2024-12-16 05:29:25.881480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:25.897513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:25.897572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:25.909385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:25.909444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 9223.75 IOPS, 72.06 MiB/s [2024-12-16T05:29:26.138Z] [2024-12-16 05:29:25.924629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:25.924701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:25.941283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:25.941343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:25.957702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:25.957760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:25.974256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:25.974314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:25.985768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:25.985826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:25.999253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:25.999312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:26.015703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:26.015763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:26.029388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:26.029449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:26.047121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:26.047182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:26.063259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:26.063318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:26.075698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:45.879 [2024-12-16 05:29:26.075760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:26.095196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:26.095275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:26.109118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:26.109164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.879 [2024-12-16 05:29:26.128361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.879 [2024-12-16 05:29:26.128422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.138 [2024-12-16 05:29:26.142960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.138 [2024-12-16 05:29:26.143038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.138 [2024-12-16 05:29:26.161468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.138 [2024-12-16 05:29:26.161532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.138 [2024-12-16 05:29:26.175381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.138 [2024-12-16 05:29:26.175440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.138 [2024-12-16 05:29:26.193345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.138 [2024-12-16 05:29:26.193405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.138 [2024-12-16 05:29:26.209805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.138 [2024-12-16 05:29:26.209850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.138 [2024-12-16 05:29:26.226024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.138 [2024-12-16 05:29:26.226087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.138 [2024-12-16 05:29:26.238124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.138 [2024-12-16 05:29:26.238217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.138 [2024-12-16 05:29:26.252806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.138 [2024-12-16 05:29:26.252865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.138 [2024-12-16 05:29:26.270567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.138 [2024-12-16 05:29:26.270637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.139 [2024-12-16 05:29:26.285804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.139 [2024-12-16 05:29:26.285863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.139 [2024-12-16 05:29:26.300583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.139 [2024-12-16 05:29:26.300678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.139 [2024-12-16 05:29:26.311255] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.139 [2024-12-16 05:29:26.311313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.139 [2024-12-16 05:29:26.325817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.139 [2024-12-16 05:29:26.325865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.139 [2024-12-16 05:29:26.344157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.139 [2024-12-16 05:29:26.344203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.139 [2024-12-16 05:29:26.360314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.139 [2024-12-16 05:29:26.360373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.139 [2024-12-16 05:29:26.372834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.139 [2024-12-16 05:29:26.372897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.139 [2024-12-16 05:29:26.390428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.139 [2024-12-16 05:29:26.390507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.398 [2024-12-16 05:29:26.407245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.398 [2024-12-16 05:29:26.407303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.398 [2024-12-16 05:29:26.424625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.398 [2024-12-16 05:29:26.424695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.398 [2024-12-16 05:29:26.441005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.398 [2024-12-16 05:29:26.441079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.398 [2024-12-16 05:29:26.452458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.398 [2024-12-16 05:29:26.452545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.398 [2024-12-16 05:29:26.469621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.398 [2024-12-16 05:29:26.469689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.398 [2024-12-16 05:29:26.485382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.398 [2024-12-16 05:29:26.485440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.398 [2024-12-16 05:29:26.502457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.398 [2024-12-16 05:29:26.502516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.398 [2024-12-16 05:29:26.518077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.398 [2024-12-16 05:29:26.518135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.399 [2024-12-16 05:29:26.529691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.399 [2024-12-16 05:29:26.529750] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.399 [2024-12-16 05:29:26.544259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.399 [2024-12-16 05:29:26.544317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.399 [2024-12-16 05:29:26.561524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.399 [2024-12-16 05:29:26.561582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.399 [2024-12-16 05:29:26.573992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.399 [2024-12-16 05:29:26.574050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.399 [2024-12-16 05:29:26.591213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.399 [2024-12-16 05:29:26.591272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.399 [2024-12-16 05:29:26.606525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.399 [2024-12-16 05:29:26.606645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.399 [2024-12-16 05:29:26.617954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.399 [2024-12-16 05:29:26.618028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.399 [2024-12-16 05:29:26.632085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.399 [2024-12-16 05:29:26.632161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.399 [2024-12-16 05:29:26.648719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.399 [2024-12-16 05:29:26.648808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.662857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.662932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.678193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.678253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.696058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.696120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.708979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.709039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.723585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.723674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.740550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.740636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.752651] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.752747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.771985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.772049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.787971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.788031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.804088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.804149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.815437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.815496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.829719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.829777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.843164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.843224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.860304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.860362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.876545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.876647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.888554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.888641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.658 [2024-12-16 05:29:26.907316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.658 [2024-12-16 05:29:26.907374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 9215.20 IOPS, 71.99 MiB/s [2024-12-16T05:29:27.177Z] [2024-12-16 05:29:26.920037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:26.920101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:26.929105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:26.929162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 00:11:46.918 Latency(us) 00:11:46.918 [2024-12-16T05:29:27.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.918 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:46.918 Nvme1n1 : 5.01 9214.36 71.99 0.00 0.00 13868.16 5004.57 25261.15 00:11:46.918 [2024-12-16T05:29:27.177Z] 
=================================================================================================================== 00:11:46.918 [2024-12-16T05:29:27.177Z] Total : 9214.36 71.99 0.00 0.00 13868.16 5004.57 25261.15 00:11:46.918 [2024-12-16 05:29:26.935652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:26.935708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:26.943636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:26.943707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:26.955658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:26.955728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:26.963677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:26.963732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:26.971644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:26.971698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:26.983730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:26.983807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:26.995727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:26.995808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.003670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.003724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.015656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.015710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.023668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.023721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.031655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.031692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.043681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.043736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.055793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.055894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.067680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.067734] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.079692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.079746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.087675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.087729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.099689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.099756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.107766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.107810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.115726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.115770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.123805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.123899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.135806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.135911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.143697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.143752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.151789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.151831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.159737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.159781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.918 [2024-12-16 05:29:27.171756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.918 [2024-12-16 05:29:27.171801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.183774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.183831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.195721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.195778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.207801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.207898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.219739] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.219795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.231717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.231773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.243759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.243815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.255918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.256003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.267725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.267781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.279767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.279822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.291746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.291802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.303928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.178 [2024-12-16 05:29:27.304004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.178 [2024-12-16 05:29:27.311774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.311829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.319751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.319805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.327782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.327837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.335788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.335842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.343779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.343832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.351797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.351873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.359817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.359881] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.367821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.367902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.375838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.375916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.383842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.383928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.395878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.395951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.403815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.403897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.415798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.415874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.423826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.423904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.179 [2024-12-16 05:29:27.435900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.179 [2024-12-16 05:29:27.435958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.443829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.443908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.451829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.451907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.459816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.459895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.467843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.467923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.475901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.475958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.487923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.487977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.495932] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.495975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.503917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.503963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.515960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.516006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.527965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.528012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.539961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.540010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.547980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.548025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.559966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.560011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.572039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.438 [2024-12-16 05:29:27.572109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.438 [2024-12-16 05:29:27.583979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.584032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.595950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.595994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.608019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.608077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.615995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.616038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.627968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.628025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.636027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.636070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.648017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.648077] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.655985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.656044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.668099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.668203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.675994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.676037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 [2024-12-16 05:29:27.684041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.439 [2024-12-16 05:29:27.684084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.439 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69774) - No such process 00:11:47.439 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69774 00:11:47.439 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.439 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.439 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.699 delay0 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.699 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:47.958 [2024-12-16 05:29:27.964883] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:54.549 Initializing NVMe Controllers 00:11:54.549 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.549 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:54.549 Initialization complete. Launching workers. 
00:11:54.549 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 95 00:11:54.549 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 382, failed to submit 33 00:11:54.549 success 265, unsuccessful 117, failed 0 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.549 rmmod nvme_tcp 00:11:54.549 rmmod nvme_fabrics 00:11:54.549 rmmod nvme_keyring 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69606 ']' 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69606 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 69606 ']' 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 69606 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69606 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:54.549 killing process with pid 69606 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69606' 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 69606 00:11:54.549 05:29:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 69606 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:55.116 05:29:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:55.116 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:55.374 00:11:55.374 real 0m27.833s 00:11:55.374 user 0m45.294s 00:11:55.374 sys 0m7.031s 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.374 ************************************ 00:11:55.374 END TEST nvmf_zcopy 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.374 ************************************ 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:55.374 ************************************ 00:11:55.374 START TEST nvmf_nmic 00:11:55.374 ************************************ 00:11:55.374 05:29:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:55.374 * Looking for test storage... 00:11:55.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:55.374 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:55.633 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.634 --rc genhtml_branch_coverage=1 00:11:55.634 --rc genhtml_function_coverage=1 00:11:55.634 --rc genhtml_legend=1 00:11:55.634 --rc geninfo_all_blocks=1 00:11:55.634 --rc geninfo_unexecuted_blocks=1 00:11:55.634 00:11:55.634 ' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.634 --rc genhtml_branch_coverage=1 00:11:55.634 --rc genhtml_function_coverage=1 00:11:55.634 --rc genhtml_legend=1 00:11:55.634 --rc geninfo_all_blocks=1 00:11:55.634 --rc geninfo_unexecuted_blocks=1 00:11:55.634 00:11:55.634 ' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.634 --rc genhtml_branch_coverage=1 00:11:55.634 --rc genhtml_function_coverage=1 00:11:55.634 --rc genhtml_legend=1 00:11:55.634 --rc geninfo_all_blocks=1 00:11:55.634 --rc geninfo_unexecuted_blocks=1 00:11:55.634 00:11:55.634 ' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.634 --rc genhtml_branch_coverage=1 00:11:55.634 --rc genhtml_function_coverage=1 00:11:55.634 --rc genhtml_legend=1 00:11:55.634 --rc geninfo_all_blocks=1 00:11:55.634 --rc geninfo_unexecuted_blocks=1 00:11:55.634 00:11:55.634 ' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.634 05:29:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:55.634 05:29:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.634 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:55.635 Cannot 
find device "nvmf_init_br" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:55.635 Cannot find device "nvmf_init_br2" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:55.635 Cannot find device "nvmf_tgt_br" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:55.635 Cannot find device "nvmf_tgt_br2" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:55.635 Cannot find device "nvmf_init_br" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:55.635 Cannot find device "nvmf_init_br2" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:55.635 Cannot find device "nvmf_tgt_br" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:55.635 Cannot find device "nvmf_tgt_br2" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:55.635 Cannot find device "nvmf_br" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:55.635 Cannot find device "nvmf_init_if" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:55.635 Cannot find device "nvmf_init_if2" 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:55.635 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:55.894 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:55.894 05:29:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:55.894 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:56.153 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:56.153 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:11:56.153 00:11:56.153 --- 10.0.0.3 ping statistics --- 00:11:56.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.153 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:56.153 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:56.153 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:11:56.153 00:11:56.153 --- 10.0.0.4 ping statistics --- 00:11:56.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.153 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:56.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:11:56.153 00:11:56.153 --- 10.0.0.1 ping statistics --- 00:11:56.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.153 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:56.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:56.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:11:56.153 00:11:56.153 --- 10.0.0.2 ping statistics --- 00:11:56.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.153 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=70163 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 70163 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 70163 ']' 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.153 05:29:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.153 [2024-12-16 05:29:36.323943] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:56.153 [2024-12-16 05:29:36.324108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.412 [2024-12-16 05:29:36.514829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.412 [2024-12-16 05:29:36.644609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.412 [2024-12-16 05:29:36.644678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.412 [2024-12-16 05:29:36.644701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.412 [2024-12-16 05:29:36.644716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.412 [2024-12-16 05:29:36.644732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.413 [2024-12-16 05:29:36.646961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.413 [2024-12-16 05:29:36.647112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.413 [2024-12-16 05:29:36.647178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.413 [2024-12-16 05:29:36.647465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.671 [2024-12-16 05:29:36.879055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.237 [2024-12-16 05:29:37.378525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.237 Malloc0 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.237 05:29:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.237 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.238 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.238 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.238 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.238 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:57.238 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.238 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 [2024-12-16 05:29:37.496461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:57.496 test case1: single bdev can't be used in multiple subsystems 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 [2024-12-16 05:29:37.520215] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:57.496 [2024-12-16 05:29:37.520267] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:57.496 [2024-12-16 05:29:37.520289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.496 request: 00:11:57.496 { 00:11:57.496 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:57.496 "namespace": { 00:11:57.496 "bdev_name": "Malloc0", 00:11:57.496 "no_auto_visible": false, 00:11:57.496 "hide_metadata": false 00:11:57.496 }, 00:11:57.496 "method": "nvmf_subsystem_add_ns", 00:11:57.496 "req_id": 1 00:11:57.496 } 00:11:57.496 Got JSON-RPC error response 00:11:57.496 response: 00:11:57.496 { 00:11:57.496 "code": -32602, 00:11:57.496 "message": "Invalid parameters" 00:11:57.496 } 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:57.496 Adding namespace failed - expected result. 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:57.496 test case2: host connect to nvmf target in multiple paths 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 [2024-12-16 05:29:37.532373] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:57.496 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:57.754 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.754 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.754 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.754 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.754 05:29:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.658 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.658 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.658 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.658 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.658 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:11:59.658 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:59.658 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:59.658 [global] 00:11:59.658 thread=1 00:11:59.658 invalidate=1 00:11:59.658 rw=write 00:11:59.658 time_based=1 00:11:59.658 runtime=1 00:11:59.658 ioengine=libaio 00:11:59.658 direct=1 00:11:59.658 bs=4096 00:11:59.658 iodepth=1 00:11:59.658 norandommap=0 00:11:59.658 numjobs=1 00:11:59.658 00:11:59.658 verify_dump=1 00:11:59.658 verify_backlog=512 00:11:59.658 verify_state_save=0 00:11:59.658 do_verify=1 00:11:59.658 verify=crc32c-intel 00:11:59.658 [job0] 00:11:59.658 filename=/dev/nvme0n1 00:11:59.658 Could not set queue depth (nvme0n1) 00:11:59.916 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:59.916 fio-3.35 00:11:59.916 Starting 1 thread 00:12:01.295 00:12:01.295 job0: (groupid=0, jobs=1): err= 0: pid=70255: Mon Dec 16 05:29:41 2024 00:12:01.295 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:01.295 slat (nsec): min=11510, max=62269, avg=14626.61, stdev=4969.22 00:12:01.295 clat (usec): min=159, max=770, avg=210.23, stdev=32.77 00:12:01.295 lat (usec): min=175, max=793, avg=224.86, stdev=33.21 00:12:01.295 clat percentiles (usec): 00:12:01.295 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:12:01.295 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 215], 00:12:01.295 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 251], 00:12:01.295 | 99.00th=[ 273], 99.50th=[ 318], 99.90th=[ 660], 99.95th=[ 660], 00:12:01.295 | 99.99th=[ 766] 00:12:01.295 write: IOPS=2674, BW=10.4MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:12:01.295 slat (usec): min=16, max=103, avg=21.43, stdev= 7.34 00:12:01.295 clat (usec): min=106, max=701, avg=134.06, stdev=27.55 00:12:01.295 lat (usec): min=124, max=724, avg=155.49, stdev=29.45 00:12:01.295 clat percentiles (usec): 00:12:01.295 | 1.00th=[ 110], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 119], 00:12:01.295 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 133], 00:12:01.295 | 70.00th=[ 141], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 169], 00:12:01.295 | 99.00th=[ 194], 99.50th=[ 208], 99.90th=[ 529], 99.95th=[ 545], 00:12:01.295 | 99.99th=[ 701] 00:12:01.295 bw ( KiB/s): min=12263, max=12263, per=100.00%, avg=12263.00, stdev= 0.00, samples=1 00:12:01.295 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:12:01.295 lat (usec) : 250=97.33%, 500=2.46%, 750=0.19%, 1000=0.02% 00:12:01.295 cpu : usr=1.40%, sys=8.00%, ctx=5237, majf=0, minf=5 00:12:01.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:01.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:01.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:01.295 issued rwts: total=2560,2677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:01.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:01.295 00:12:01.295 Run status group 0 (all jobs): 00:12:01.295 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:01.295 WRITE: bw=10.4MiB/s (11.0MB/s), 10.4MiB/s-10.4MiB/s (11.0MB/s-11.0MB/s), io=10.5MiB (11.0MB), run=1001-1001msec 00:12:01.295 00:12:01.296 Disk stats (read/write): 00:12:01.296 nvme0n1: ios=2229/2560, 
merge=0/0, ticks=493/371, in_queue=864, util=91.48% 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:01.296 rmmod nvme_tcp 00:12:01.296 rmmod nvme_fabrics 00:12:01.296 rmmod nvme_keyring 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 70163 ']' 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 70163 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 70163 ']' 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 70163 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70163 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70163' 00:12:01.296 killing process with pid 70163 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 70163 00:12:01.296 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 70163 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:02.233 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:02.492 00:12:02.492 real 0m7.170s 00:12:02.492 user 0m21.353s 00:12:02.492 sys 0m2.405s 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.492 
************************************ 00:12:02.492 END TEST nvmf_nmic 00:12:02.492 ************************************ 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:02.492 ************************************ 00:12:02.492 START TEST nvmf_fio_target 00:12:02.492 ************************************ 00:12:02.492 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:02.752 * Looking for test storage... 00:12:02.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:02.752 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:02.752 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:02.752 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:02.752 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:02.752 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.752 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.752 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.752 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.752 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:02.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.753 --rc genhtml_branch_coverage=1 00:12:02.753 --rc genhtml_function_coverage=1 00:12:02.753 --rc genhtml_legend=1 00:12:02.753 --rc geninfo_all_blocks=1 00:12:02.753 --rc geninfo_unexecuted_blocks=1 00:12:02.753 00:12:02.753 ' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:02.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.753 --rc genhtml_branch_coverage=1 00:12:02.753 --rc genhtml_function_coverage=1 00:12:02.753 --rc genhtml_legend=1 00:12:02.753 --rc geninfo_all_blocks=1 00:12:02.753 --rc geninfo_unexecuted_blocks=1 00:12:02.753 00:12:02.753 ' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:02.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.753 --rc genhtml_branch_coverage=1 00:12:02.753 --rc genhtml_function_coverage=1 00:12:02.753 --rc genhtml_legend=1 00:12:02.753 --rc geninfo_all_blocks=1 00:12:02.753 --rc geninfo_unexecuted_blocks=1 00:12:02.753 00:12:02.753 ' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:02.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.753 --rc genhtml_branch_coverage=1 00:12:02.753 --rc genhtml_function_coverage=1 00:12:02.753 --rc genhtml_legend=1 00:12:02.753 --rc geninfo_all_blocks=1 00:12:02.753 --rc geninfo_unexecuted_blocks=1 00:12:02.753 00:12:02.753 ' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:02.753 
05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.753 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:02.753 05:29:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:02.753 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:02.754 Cannot find device "nvmf_init_br" 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:02.754 Cannot find device "nvmf_init_br2" 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:02.754 Cannot find device "nvmf_tgt_br" 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.754 Cannot find device "nvmf_tgt_br2" 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:02.754 Cannot find device "nvmf_init_br" 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:02.754 Cannot find device "nvmf_init_br2" 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:02.754 Cannot find device "nvmf_tgt_br" 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:02.754 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:02.754 Cannot find device "nvmf_tgt_br2" 00:12:02.754 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:02.754 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:03.013 Cannot find device "nvmf_br" 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:03.013 Cannot find device "nvmf_init_if" 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:03.013 Cannot find device "nvmf_init_if2" 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:03.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:03.013 
05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:03.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:03.013 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:03.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:03.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:03.014 00:12:03.014 --- 10.0.0.3 ping statistics --- 00:12:03.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.014 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:03.014 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:03.273 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:03.273 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:12:03.273 00:12:03.273 --- 10.0.0.4 ping statistics --- 00:12:03.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.273 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:03.273 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:03.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:03.273 00:12:03.273 --- 10.0.0.1 ping statistics --- 00:12:03.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.273 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:03.273 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:03.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:03.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:12:03.273 00:12:03.273 --- 10.0.0.2 ping statistics --- 00:12:03.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.273 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:12:03.273 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.273 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:12:03.273 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.273 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.273 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.273 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.273 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=70496 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 70496 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 70496 ']' 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.274 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.274 [2024-12-16 05:29:43.438068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:03.274 [2024-12-16 05:29:43.438241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.533 [2024-12-16 05:29:43.622296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.533 [2024-12-16 05:29:43.710432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.533 [2024-12-16 05:29:43.710772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.533 [2024-12-16 05:29:43.710978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.533 [2024-12-16 05:29:43.711119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.533 [2024-12-16 05:29:43.711317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.533 [2024-12-16 05:29:43.713147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.533 [2024-12-16 05:29:43.713251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.533 [2024-12-16 05:29:43.713820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.533 [2024-12-16 05:29:43.713836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.792 [2024-12-16 05:29:43.877498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:04.360 05:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.360 05:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:04.360 05:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.360 05:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.360 05:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.360 05:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.360 05:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:04.619 [2024-12-16 05:29:44.712577] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.619 05:29:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.878 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:04.878 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:05.445 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:05.445 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:05.703 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:05.703 05:29:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:05.962 05:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:05.962 05:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:06.220 05:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:06.789 05:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:06.789 05:29:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.048 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:07.048 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.319 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:07.319 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:07.630 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.889 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:07.889 05:29:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.147 05:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:08.147 05:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.406 05:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:08.406 [2024-12-16 05:29:48.662221] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:08.664 05:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:08.923 05:29:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:09.183 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:09.183 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:09.183 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:09.183 05:29:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.183 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:09.183 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:09.183 05:29:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:11.718 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:11.718 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:11.718 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.718 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:11.718 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.718 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:11.718 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:11.718 [global] 00:12:11.718 thread=1 00:12:11.718 invalidate=1 00:12:11.718 rw=write 00:12:11.718 time_based=1 00:12:11.718 runtime=1 00:12:11.718 ioengine=libaio 00:12:11.718 direct=1 00:12:11.718 bs=4096 00:12:11.718 iodepth=1 00:12:11.718 norandommap=0 00:12:11.718 numjobs=1 00:12:11.718 00:12:11.718 verify_dump=1 00:12:11.718 verify_backlog=512 00:12:11.718 verify_state_save=0 00:12:11.718 do_verify=1 00:12:11.718 verify=crc32c-intel 00:12:11.718 [job0] 00:12:11.718 filename=/dev/nvme0n1 00:12:11.718 [job1] 00:12:11.718 filename=/dev/nvme0n2 00:12:11.718 [job2] 00:12:11.718 filename=/dev/nvme0n3 00:12:11.718 [job3] 00:12:11.718 filename=/dev/nvme0n4 00:12:11.718 Could not set queue depth (nvme0n1) 00:12:11.718 Could not set queue depth (nvme0n2) 00:12:11.718 Could not set queue depth (nvme0n3) 00:12:11.718 Could not set queue depth (nvme0n4) 00:12:11.718 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.718 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.718 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.718 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.718 fio-3.35 00:12:11.718 Starting 4 threads 00:12:12.655 00:12:12.656 job0: (groupid=0, jobs=1): err= 0: pid=70691: Mon Dec 16 05:29:52 2024 00:12:12.656 read: IOPS=1661, BW=6645KiB/s (6805kB/s)(6652KiB/1001msec) 00:12:12.656 slat (nsec): min=8865, max=53859, avg=15137.91, stdev=4303.69 00:12:12.656 clat (usec): min=205, max=415, avg=286.49, stdev=26.03 00:12:12.656 lat (usec): min=220, max=428, avg=301.63, stdev=26.27 00:12:12.656 clat percentiles (usec): 00:12:12.656 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 265], 00:12:12.656 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:12:12.656 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 330], 00:12:12.656 | 99.00th=[ 367], 99.50th=[ 371], 99.90th=[ 408], 99.95th=[ 416], 00:12:12.656 | 99.99th=[ 416] 
00:12:12.656 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:12.656 slat (usec): min=12, max=122, avg=20.46, stdev= 6.47 00:12:12.656 clat (usec): min=165, max=320, avg=220.06, stdev=26.45 00:12:12.656 lat (usec): min=183, max=370, avg=240.52, stdev=27.22 00:12:12.656 clat percentiles (usec): 00:12:12.656 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:12:12.656 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:12:12.656 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 269], 00:12:12.656 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 306], 99.95th=[ 318], 00:12:12.656 | 99.99th=[ 322] 00:12:12.656 bw ( KiB/s): min= 8175, max= 8175, per=28.54%, avg=8175.00, stdev= 0.00, samples=1 00:12:12.656 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:12:12.656 lat (usec) : 250=49.74%, 500=50.26% 00:12:12.656 cpu : usr=1.30%, sys=6.00%, ctx=3711, majf=0, minf=15 00:12:12.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.656 issued rwts: total=1663,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.656 job1: (groupid=0, jobs=1): err= 0: pid=70692: Mon Dec 16 05:29:52 2024 00:12:12.656 read: IOPS=1661, BW=6645KiB/s (6805kB/s)(6652KiB/1001msec) 00:12:12.656 slat (usec): min=8, max=135, avg=11.76, stdev= 4.77 00:12:12.656 clat (usec): min=177, max=424, avg=290.00, stdev=26.18 00:12:12.656 lat (usec): min=232, max=434, avg=301.76, stdev=26.37 00:12:12.656 clat percentiles (usec): 00:12:12.656 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:12:12.656 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:12:12.656 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 338], 00:12:12.656 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 404], 99.95th=[ 424], 00:12:12.656 | 99.99th=[ 424] 00:12:12.656 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:12.656 slat (nsec): min=11111, max=71495, avg=19063.05, stdev=7091.01 00:12:12.656 clat (usec): min=169, max=397, avg=221.59, stdev=26.07 00:12:12.656 lat (usec): min=183, max=426, avg=240.66, stdev=27.57 00:12:12.656 clat percentiles (usec): 00:12:12.656 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:12:12.656 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:12:12.656 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 269], 00:12:12.656 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 355], 00:12:12.656 | 99.99th=[ 400] 00:12:12.656 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:12:12.656 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:12.656 lat (usec) : 250=48.59%, 500=51.41% 00:12:12.656 cpu : usr=1.00%, sys=5.20%, ctx=3712, majf=0, minf=9 00:12:12.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.656 issued rwts: total=1663,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.656 job2: (groupid=0, jobs=1): err= 0: pid=70693: Mon Dec 16 
05:29:52 2024 00:12:12.656 read: IOPS=1203, BW=4815KiB/s (4931kB/s)(4820KiB/1001msec) 00:12:12.656 slat (nsec): min=15184, max=80175, avg=26163.34, stdev=9377.22 00:12:12.656 clat (usec): min=222, max=7883, avg=416.87, stdev=412.86 00:12:12.656 lat (usec): min=245, max=7909, avg=443.03, stdev=413.77 00:12:12.656 clat percentiles (usec): 00:12:12.656 | 1.00th=[ 293], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:12:12.656 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 367], 00:12:12.656 | 70.00th=[ 388], 80.00th=[ 465], 90.00th=[ 502], 95.00th=[ 529], 00:12:12.656 | 99.00th=[ 758], 99.50th=[ 3228], 99.90th=[ 7504], 99.95th=[ 7898], 00:12:12.656 | 99.99th=[ 7898] 00:12:12.656 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:12.656 slat (usec): min=21, max=108, avg=35.69, stdev=10.35 00:12:12.656 clat (usec): min=132, max=513, avg=262.78, stdev=81.46 00:12:12.656 lat (usec): min=163, max=555, avg=298.47, stdev=87.10 00:12:12.656 clat percentiles (usec): 00:12:12.656 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 180], 00:12:12.656 | 30.00th=[ 227], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 273], 00:12:12.656 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 392], 95.00th=[ 441], 00:12:12.656 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 510], 99.95th=[ 515], 00:12:12.656 | 99.99th=[ 515] 00:12:12.656 bw ( KiB/s): min= 6602, max= 6602, per=23.05%, avg=6602.00, stdev= 0.00, samples=1 00:12:12.656 iops : min= 1650, max= 1650, avg=1650.00, stdev= 0.00, samples=1 00:12:12.656 lat (usec) : 250=22.84%, 500=72.20%, 750=4.49%, 1000=0.15% 00:12:12.656 lat (msec) : 2=0.07%, 4=0.15%, 10=0.11% 00:12:12.656 cpu : usr=2.00%, sys=6.70%, ctx=2741, majf=0, minf=11 00:12:12.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.656 issued rwts: total=1205,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.656 job3: (groupid=0, jobs=1): err= 0: pid=70694: Mon Dec 16 05:29:52 2024 00:12:12.656 read: IOPS=1430, BW=5722KiB/s (5860kB/s)(5728KiB/1001msec) 00:12:12.656 slat (nsec): min=14272, max=83453, avg=25163.39, stdev=7398.96 00:12:12.656 clat (usec): min=182, max=845, avg=379.75, stdev=106.06 00:12:12.656 lat (usec): min=198, max=892, avg=404.91, stdev=109.84 00:12:12.656 clat percentiles (usec): 00:12:12.656 | 1.00th=[ 204], 5.00th=[ 265], 10.00th=[ 302], 20.00th=[ 322], 00:12:12.656 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 359], 00:12:12.656 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[ 611], 95.00th=[ 635], 00:12:12.656 | 99.00th=[ 676], 99.50th=[ 685], 99.90th=[ 717], 99.95th=[ 848], 00:12:12.656 | 99.99th=[ 848] 00:12:12.656 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:12.656 slat (usec): min=21, max=125, avg=32.14, stdev= 8.30 00:12:12.656 clat (usec): min=124, max=525, avg=236.28, stdev=48.59 00:12:12.656 lat (usec): min=146, max=551, avg=268.42, stdev=50.75 00:12:12.656 clat percentiles (usec): 00:12:12.656 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 169], 20.00th=[ 188], 00:12:12.656 | 30.00th=[ 208], 40.00th=[ 229], 50.00th=[ 245], 60.00th=[ 255], 00:12:12.656 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 302], 00:12:12.656 | 99.00th=[ 363], 99.50th=[ 412], 99.90th=[ 523], 99.95th=[ 529], 00:12:12.656 | 99.99th=[ 529] 
00:12:12.656 bw ( KiB/s): min= 8175, max= 8175, per=28.54%, avg=8175.00, stdev= 0.00, samples=1 00:12:12.656 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:12:12.656 lat (usec) : 250=30.63%, 500=62.90%, 750=6.44%, 1000=0.03% 00:12:12.656 cpu : usr=1.60%, sys=7.00%, ctx=2971, majf=0, minf=4 00:12:12.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.656 issued rwts: total=1432,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.656 00:12:12.656 Run status group 0 (all jobs): 00:12:12.656 READ: bw=23.3MiB/s (24.4MB/s), 4815KiB/s-6645KiB/s (4931kB/s-6805kB/s), io=23.3MiB (24.4MB), run=1001-1001msec 00:12:12.656 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:12:12.656 00:12:12.656 Disk stats (read/write): 00:12:12.656 nvme0n1: ios=1586/1615, merge=0/0, ticks=505/336, in_queue=841, util=88.38% 00:12:12.656 nvme0n2: ios=1571/1615, merge=0/0, ticks=442/339, in_queue=781, util=88.10% 00:12:12.656 nvme0n3: ios=1024/1210, merge=0/0, ticks=425/363, in_queue=788, util=87.94% 00:12:12.656 nvme0n4: ios=1130/1536, merge=0/0, ticks=417/383, in_queue=800, util=89.64% 00:12:12.656 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:12.656 [global] 00:12:12.656 thread=1 00:12:12.656 invalidate=1 00:12:12.656 rw=randwrite 00:12:12.656 time_based=1 00:12:12.656 runtime=1 00:12:12.656 ioengine=libaio 00:12:12.656 direct=1 00:12:12.656 bs=4096 00:12:12.656 iodepth=1 00:12:12.656 norandommap=0 00:12:12.656 numjobs=1 00:12:12.656 00:12:12.656 verify_dump=1 00:12:12.656 verify_backlog=512 00:12:12.656 verify_state_save=0 00:12:12.656 do_verify=1 00:12:12.656 verify=crc32c-intel 00:12:12.656 [job0] 00:12:12.656 filename=/dev/nvme0n1 00:12:12.656 [job1] 00:12:12.656 filename=/dev/nvme0n2 00:12:12.656 [job2] 00:12:12.656 filename=/dev/nvme0n3 00:12:12.656 [job3] 00:12:12.656 filename=/dev/nvme0n4 00:12:12.656 Could not set queue depth (nvme0n1) 00:12:12.656 Could not set queue depth (nvme0n2) 00:12:12.656 Could not set queue depth (nvme0n3) 00:12:12.656 Could not set queue depth (nvme0n4) 00:12:12.915 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.915 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.915 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.915 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.915 fio-3.35 00:12:12.915 Starting 4 threads 00:12:14.292 00:12:14.292 job0: (groupid=0, jobs=1): err= 0: pid=70753: Mon Dec 16 05:29:54 2024 00:12:14.292 read: IOPS=1442, BW=5770KiB/s (5909kB/s)(5776KiB/1001msec) 00:12:14.292 slat (nsec): min=17811, max=57782, avg=23090.22, stdev=4982.25 00:12:14.292 clat (usec): min=207, max=679, avg=351.57, stdev=54.37 00:12:14.292 lat (usec): min=229, max=705, avg=374.66, stdev=56.34 00:12:14.292 clat percentiles (usec): 00:12:14.292 | 1.00th=[ 297], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:12:14.293 | 30.00th=[ 330], 40.00th=[ 
338], 50.00th=[ 343], 60.00th=[ 347], 00:12:14.293 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 412], 00:12:14.293 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 676], 99.95th=[ 676], 00:12:14.293 | 99.99th=[ 676] 00:12:14.293 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:14.293 slat (usec): min=26, max=112, avg=34.99, stdev= 6.56 00:12:14.293 clat (usec): min=129, max=485, avg=258.24, stdev=33.64 00:12:14.293 lat (usec): min=162, max=521, avg=293.23, stdev=34.56 00:12:14.293 clat percentiles (usec): 00:12:14.293 | 1.00th=[ 145], 5.00th=[ 176], 10.00th=[ 225], 20.00th=[ 247], 00:12:14.293 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:12:14.293 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:12:14.293 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 400], 99.95th=[ 486], 00:12:14.293 | 99.99th=[ 486] 00:12:14.293 bw ( KiB/s): min= 8192, max= 8192, per=23.75%, avg=8192.00, stdev= 0.00, samples=1 00:12:14.293 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:14.293 lat (usec) : 250=13.39%, 500=85.13%, 750=1.48% 00:12:14.293 cpu : usr=2.00%, sys=7.00%, ctx=2980, majf=0, minf=9 00:12:14.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.293 issued rwts: total=1444,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.293 job1: (groupid=0, jobs=1): err= 0: pid=70754: Mon Dec 16 05:29:54 2024 00:12:14.293 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:14.293 slat (nsec): min=11940, max=46664, avg=13724.75, stdev=3497.61 00:12:14.293 clat (usec): min=166, max=646, avg=193.87, stdev=17.80 00:12:14.293 lat (usec): min=178, max=662, avg=207.60, stdev=18.01 00:12:14.293 clat percentiles (usec): 00:12:14.293 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:12:14.293 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:12:14.293 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 221], 00:12:14.293 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 314], 99.95th=[ 416], 00:12:14.293 | 99.99th=[ 644] 00:12:14.293 write: IOPS=2996, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:12:14.293 slat (usec): min=14, max=101, avg=19.73, stdev= 4.68 00:12:14.293 clat (usec): min=110, max=1669, avg=133.78, stdev=31.09 00:12:14.293 lat (usec): min=129, max=1688, avg=153.50, stdev=31.83 00:12:14.293 clat percentiles (usec): 00:12:14.293 | 1.00th=[ 115], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 123], 00:12:14.293 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 135], 00:12:14.293 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 159], 00:12:14.293 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 210], 99.95th=[ 326], 00:12:14.293 | 99.99th=[ 1663] 00:12:14.293 bw ( KiB/s): min=12288, max=12288, per=35.63%, avg=12288.00, stdev= 0.00, samples=1 00:12:14.293 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:14.293 lat (usec) : 250=99.84%, 500=0.13%, 750=0.02% 00:12:14.293 lat (msec) : 2=0.02% 00:12:14.293 cpu : usr=2.10%, sys=7.20%, ctx=5559, majf=0, minf=9 00:12:14.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.293 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.293 issued rwts: total=2560,2999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.293 job2: (groupid=0, jobs=1): err= 0: pid=70755: Mon Dec 16 05:29:54 2024 00:12:14.293 read: IOPS=2269, BW=9076KiB/s (9294kB/s)(9076KiB/1000msec) 00:12:14.293 slat (nsec): min=11673, max=44048, avg=14699.42, stdev=3396.59 00:12:14.293 clat (usec): min=173, max=2223, avg=212.59, stdev=61.23 00:12:14.293 lat (usec): min=186, max=2239, avg=227.29, stdev=61.43 00:12:14.293 clat percentiles (usec): 00:12:14.293 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:12:14.293 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:12:14.293 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 241], 00:12:14.293 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 775], 99.95th=[ 1893], 00:12:14.293 | 99.99th=[ 2212] 00:12:14.293 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 00:12:14.293 slat (usec): min=15, max=105, avg=21.89, stdev= 5.46 00:12:14.293 clat (usec): min=123, max=830, avg=163.90, stdev=20.47 00:12:14.293 lat (usec): min=141, max=852, avg=185.79, stdev=21.21 00:12:14.293 clat percentiles (usec): 00:12:14.293 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:12:14.293 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:12:14.293 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:12:14.293 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 245], 99.95th=[ 347], 00:12:14.293 | 99.99th=[ 832] 00:12:14.293 bw ( KiB/s): min=11096, max=11096, per=32.17%, avg=11096.00, stdev= 0.00, samples=1 00:12:14.293 iops : min= 2774, max= 2774, avg=2774.00, stdev= 0.00, samples=1 00:12:14.293 lat (usec) : 250=98.80%, 500=1.06%, 750=0.06%, 1000=0.04% 00:12:14.293 lat (msec) : 2=0.02%, 4=0.02% 00:12:14.293 cpu : usr=2.40%, sys=6.60%, ctx=4829, majf=0, minf=15 00:12:14.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.293 issued rwts: total=2269,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.293 job3: (groupid=0, jobs=1): err= 0: pid=70756: Mon Dec 16 05:29:54 2024 00:12:14.293 read: IOPS=1396, BW=5586KiB/s (5720kB/s)(5592KiB/1001msec) 00:12:14.293 slat (nsec): min=17034, max=71680, avg=23249.38, stdev=6026.71 00:12:14.293 clat (usec): min=224, max=863, avg=349.56, stdev=40.74 00:12:14.293 lat (usec): min=245, max=894, avg=372.81, stdev=43.61 00:12:14.293 clat percentiles (usec): 00:12:14.293 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:12:14.293 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:12:14.293 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 416], 00:12:14.293 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 594], 99.95th=[ 865], 00:12:14.293 | 99.99th=[ 865] 00:12:14.293 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:14.293 slat (usec): min=22, max=157, avg=35.90, stdev= 7.75 00:12:14.293 clat (usec): min=142, max=859, avg=270.65, stdev=52.09 00:12:14.293 lat (usec): min=168, max=895, avg=306.55, stdev=55.16 00:12:14.293 clat percentiles (usec): 00:12:14.293 | 1.00th=[ 155], 5.00th=[ 180], 10.00th=[ 241], 20.00th=[ 249], 
00:12:14.293 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:12:14.293 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 392], 00:12:14.293 | 99.00th=[ 453], 99.50th=[ 478], 99.90th=[ 586], 99.95th=[ 857], 00:12:14.293 | 99.99th=[ 857] 00:12:14.293 bw ( KiB/s): min= 7808, max= 7808, per=22.64%, avg=7808.00, stdev= 0.00, samples=1 00:12:14.293 iops : min= 1952, max= 1952, avg=1952.00, stdev= 0.00, samples=1 00:12:14.293 lat (usec) : 250=10.67%, 500=88.31%, 750=0.95%, 1000=0.07% 00:12:14.293 cpu : usr=2.20%, sys=6.60%, ctx=2934, majf=0, minf=13 00:12:14.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.293 issued rwts: total=1398,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.293 00:12:14.293 Run status group 0 (all jobs): 00:12:14.293 READ: bw=29.9MiB/s (31.4MB/s), 5586KiB/s-9.99MiB/s (5720kB/s-10.5MB/s), io=30.0MiB (31.4MB), run=1000-1001msec 00:12:14.293 WRITE: bw=33.7MiB/s (35.3MB/s), 6138KiB/s-11.7MiB/s (6285kB/s-12.3MB/s), io=33.7MiB (35.4MB), run=1000-1001msec 00:12:14.293 00:12:14.293 Disk stats (read/write): 00:12:14.293 nvme0n1: ios=1113/1536, merge=0/0, ticks=431/422, in_queue=853, util=88.58% 00:12:14.293 nvme0n2: ios=2247/2560, merge=0/0, ticks=458/366, in_queue=824, util=88.16% 00:12:14.293 nvme0n3: ios=2048/2060, merge=0/0, ticks=437/350, in_queue=787, util=89.12% 00:12:14.293 nvme0n4: ios=1024/1514, merge=0/0, ticks=371/430, in_queue=801, util=89.68% 00:12:14.293 05:29:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:14.293 [global] 00:12:14.293 thread=1 00:12:14.293 invalidate=1 00:12:14.293 rw=write 00:12:14.293 time_based=1 00:12:14.293 runtime=1 00:12:14.293 ioengine=libaio 00:12:14.293 direct=1 00:12:14.293 bs=4096 00:12:14.293 iodepth=128 00:12:14.293 norandommap=0 00:12:14.293 numjobs=1 00:12:14.293 00:12:14.293 verify_dump=1 00:12:14.293 verify_backlog=512 00:12:14.293 verify_state_save=0 00:12:14.293 do_verify=1 00:12:14.293 verify=crc32c-intel 00:12:14.293 [job0] 00:12:14.293 filename=/dev/nvme0n1 00:12:14.293 [job1] 00:12:14.293 filename=/dev/nvme0n2 00:12:14.293 [job2] 00:12:14.293 filename=/dev/nvme0n3 00:12:14.293 [job3] 00:12:14.293 filename=/dev/nvme0n4 00:12:14.293 Could not set queue depth (nvme0n1) 00:12:14.293 Could not set queue depth (nvme0n2) 00:12:14.293 Could not set queue depth (nvme0n3) 00:12:14.293 Could not set queue depth (nvme0n4) 00:12:14.293 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.293 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.293 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.293 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.293 fio-3.35 00:12:14.293 Starting 4 threads 00:12:15.672 00:12:15.672 job0: (groupid=0, jobs=1): err= 0: pid=70811: Mon Dec 16 05:29:55 2024 00:12:15.672 read: IOPS=4344, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1002msec) 00:12:15.672 slat (usec): min=9, max=3453, avg=109.73, stdev=519.39 00:12:15.672 clat (usec): min=488, 
max=15815, avg=14408.78, stdev=1313.98 00:12:15.673 lat (usec): min=3624, max=15845, avg=14518.51, stdev=1209.86 00:12:15.673 clat percentiles (usec): 00:12:15.673 | 1.00th=[ 7570], 5.00th=[12256], 10.00th=[13960], 20.00th=[14222], 00:12:15.673 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14615], 00:12:15.673 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15401], 95.00th=[15533], 00:12:15.673 | 99.00th=[15664], 99.50th=[15664], 99.90th=[15795], 99.95th=[15795], 00:12:15.673 | 99.99th=[15795] 00:12:15.673 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:12:15.673 slat (usec): min=8, max=3775, avg=105.34, stdev=454.44 00:12:15.673 clat (usec): min=10532, max=15360, avg=13844.94, stdev=632.64 00:12:15.673 lat (usec): min=10716, max=15500, avg=13950.28, stdev=438.35 00:12:15.673 clat percentiles (usec): 00:12:15.673 | 1.00th=[11076], 5.00th=[13173], 10.00th=[13304], 20.00th=[13566], 00:12:15.673 | 30.00th=[13698], 40.00th=[13698], 50.00th=[13829], 60.00th=[13960], 00:12:15.673 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14615], 95.00th=[14746], 00:12:15.673 | 99.00th=[15139], 99.50th=[15270], 99.90th=[15401], 99.95th=[15401], 00:12:15.673 | 99.99th=[15401] 00:12:15.673 bw ( KiB/s): min=18424, max=18440, per=26.64%, avg=18432.00, stdev=11.31, samples=2 00:12:15.673 iops : min= 4606, max= 4610, avg=4608.00, stdev= 2.83, samples=2 00:12:15.673 lat (usec) : 500=0.01% 00:12:15.673 lat (msec) : 4=0.18%, 10=0.54%, 20=99.27% 00:12:15.673 cpu : usr=4.80%, sys=12.99%, ctx=282, majf=0, minf=2 00:12:15.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:15.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.673 issued rwts: total=4353,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.673 job1: (groupid=0, jobs=1): err= 0: pid=70812: Mon Dec 16 05:29:55 2024 00:12:15.673 read: IOPS=4380, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1001msec) 00:12:15.673 slat (usec): min=5, max=3440, avg=109.17, stdev=515.19 00:12:15.673 clat (usec): min=509, max=15990, avg=14342.14, stdev=1314.08 00:12:15.673 lat (usec): min=3585, max=16014, avg=14451.31, stdev=1211.33 00:12:15.673 clat percentiles (usec): 00:12:15.673 | 1.00th=[ 7570], 5.00th=[12256], 10.00th=[13829], 20.00th=[14091], 00:12:15.673 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:12:15.673 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15270], 95.00th=[15401], 00:12:15.673 | 99.00th=[15664], 99.50th=[15795], 99.90th=[15926], 99.95th=[15926], 00:12:15.673 | 99.99th=[16057] 00:12:15.673 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:12:15.673 slat (usec): min=12, max=3221, avg=104.68, stdev=449.52 00:12:15.673 clat (usec): min=10410, max=14891, avg=13779.62, stdev=572.43 00:12:15.673 lat (usec): min=11217, max=15271, avg=13884.30, stdev=351.73 00:12:15.673 clat percentiles (usec): 00:12:15.673 | 1.00th=[11076], 5.00th=[13173], 10.00th=[13435], 20.00th=[13566], 00:12:15.673 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[13960], 00:12:15.673 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14353], 95.00th=[14484], 00:12:15.673 | 99.00th=[14746], 99.50th=[14746], 99.90th=[14877], 99.95th=[14877], 00:12:15.673 | 99.99th=[14877] 00:12:15.673 bw ( KiB/s): min=18424, max=18476, per=26.66%, avg=18450.00, stdev=36.77, samples=2 00:12:15.673 iops : min= 
4606, max= 4619, avg=4612.50, stdev= 9.19, samples=2 00:12:15.673 lat (usec) : 750=0.01% 00:12:15.673 lat (msec) : 4=0.20%, 10=0.51%, 20=99.28% 00:12:15.673 cpu : usr=4.30%, sys=13.80%, ctx=282, majf=0, minf=3 00:12:15.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:15.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.673 issued rwts: total=4385,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.673 job2: (groupid=0, jobs=1): err= 0: pid=70813: Mon Dec 16 05:29:55 2024 00:12:15.673 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:12:15.673 slat (usec): min=8, max=6161, avg=129.32, stdev=526.32 00:12:15.673 clat (usec): min=12481, max=22272, avg=16819.77, stdev=1380.65 00:12:15.673 lat (usec): min=12515, max=22299, avg=16949.09, stdev=1443.87 00:12:15.673 clat percentiles (usec): 00:12:15.673 | 1.00th=[13173], 5.00th=[14353], 10.00th=[15533], 20.00th=[16188], 00:12:15.673 | 30.00th=[16450], 40.00th=[16581], 50.00th=[16712], 60.00th=[16909], 00:12:15.673 | 70.00th=[16909], 80.00th=[17171], 90.00th=[19006], 95.00th=[19530], 00:12:15.673 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21890], 99.95th=[22152], 00:12:15.673 | 99.99th=[22152] 00:12:15.673 write: IOPS=4052, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1005msec); 0 zone resets 00:12:15.673 slat (usec): min=10, max=4902, avg=122.98, stdev=516.15 00:12:15.673 clat (usec): min=4290, max=21791, avg=16334.53, stdev=1685.17 00:12:15.673 lat (usec): min=4329, max=21809, avg=16457.52, stdev=1742.67 00:12:15.673 clat percentiles (usec): 00:12:15.673 | 1.00th=[ 9372], 5.00th=[14746], 10.00th=[15401], 20.00th=[15664], 00:12:15.673 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16188], 60.00th=[16319], 00:12:15.673 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[19268], 00:12:15.673 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21365], 99.95th=[21627], 00:12:15.673 | 99.99th=[21890] 00:12:15.673 bw ( KiB/s): min=15184, max=16416, per=22.83%, avg=15800.00, stdev=871.16, samples=2 00:12:15.673 iops : min= 3796, max= 4104, avg=3950.00, stdev=217.79, samples=2 00:12:15.673 lat (msec) : 10=0.63%, 20=96.57%, 50=2.81% 00:12:15.673 cpu : usr=4.28%, sys=12.45%, ctx=412, majf=0, minf=5 00:12:15.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:15.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.673 issued rwts: total=3584,4073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.673 job3: (groupid=0, jobs=1): err= 0: pid=70814: Mon Dec 16 05:29:55 2024 00:12:15.673 read: IOPS=3762, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1003msec) 00:12:15.673 slat (usec): min=6, max=7108, avg=130.01, stdev=646.04 00:12:15.673 clat (usec): min=1208, max=23796, avg=16341.53, stdev=2174.18 00:12:15.673 lat (usec): min=4902, max=27563, avg=16471.54, stdev=2215.97 00:12:15.673 clat percentiles (usec): 00:12:15.673 | 1.00th=[ 5669], 5.00th=[12780], 10.00th=[14615], 20.00th=[15795], 00:12:15.673 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16450], 60.00th=[16712], 00:12:15.673 | 70.00th=[16909], 80.00th=[17171], 90.00th=[18220], 95.00th=[20055], 00:12:15.673 | 99.00th=[21890], 99.50th=[22414], 99.90th=[23200], 99.95th=[23462], 
00:12:15.673 | 99.99th=[23725] 00:12:15.673 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:12:15.673 slat (usec): min=11, max=7110, avg=116.67, stdev=643.87 00:12:15.673 clat (usec): min=6992, max=22967, avg=15891.02, stdev=1720.26 00:12:15.673 lat (usec): min=7028, max=23490, avg=16007.69, stdev=1820.89 00:12:15.673 clat percentiles (usec): 00:12:15.673 | 1.00th=[10814], 5.00th=[13304], 10.00th=[14615], 20.00th=[15139], 00:12:15.673 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:12:15.673 | 70.00th=[16319], 80.00th=[16712], 90.00th=[16909], 95.00th=[19268], 00:12:15.673 | 99.00th=[21890], 99.50th=[22414], 99.90th=[22938], 99.95th=[22938], 00:12:15.673 | 99.99th=[22938] 00:12:15.673 bw ( KiB/s): min=16384, max=16416, per=23.70%, avg=16400.00, stdev=22.63, samples=2 00:12:15.673 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:12:15.673 lat (msec) : 2=0.01%, 10=1.14%, 20=94.45%, 50=4.40% 00:12:15.673 cpu : usr=4.99%, sys=10.58%, ctx=349, majf=0, minf=3 00:12:15.673 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:15.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.673 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.673 issued rwts: total=3774,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.673 00:12:15.673 Run status group 0 (all jobs): 00:12:15.673 READ: bw=62.6MiB/s (65.6MB/s), 13.9MiB/s-17.1MiB/s (14.6MB/s-17.9MB/s), io=62.9MiB (65.9MB), run=1001-1005msec 00:12:15.673 WRITE: bw=67.6MiB/s (70.9MB/s), 15.8MiB/s-18.0MiB/s (16.6MB/s-18.9MB/s), io=67.9MiB (71.2MB), run=1001-1005msec 00:12:15.673 00:12:15.673 Disk stats (read/write): 00:12:15.673 nvme0n1: ios=3633/4096, merge=0/0, ticks=11690/12114, in_queue=23804, util=88.55% 00:12:15.673 nvme0n2: ios=3658/4096, merge=0/0, ticks=11781/12098, in_queue=23879, util=88.20% 00:12:15.673 nvme0n3: ios=3072/3471, merge=0/0, ticks=16510/16498, in_queue=33008, util=89.00% 00:12:15.673 nvme0n4: ios=3083/3584, merge=0/0, ticks=24613/25100, in_queue=49713, util=89.66% 00:12:15.673 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:15.673 [global] 00:12:15.673 thread=1 00:12:15.673 invalidate=1 00:12:15.673 rw=randwrite 00:12:15.673 time_based=1 00:12:15.673 runtime=1 00:12:15.673 ioengine=libaio 00:12:15.673 direct=1 00:12:15.673 bs=4096 00:12:15.673 iodepth=128 00:12:15.673 norandommap=0 00:12:15.673 numjobs=1 00:12:15.673 00:12:15.673 verify_dump=1 00:12:15.673 verify_backlog=512 00:12:15.673 verify_state_save=0 00:12:15.673 do_verify=1 00:12:15.673 verify=crc32c-intel 00:12:15.673 [job0] 00:12:15.673 filename=/dev/nvme0n1 00:12:15.673 [job1] 00:12:15.673 filename=/dev/nvme0n2 00:12:15.673 [job2] 00:12:15.673 filename=/dev/nvme0n3 00:12:15.673 [job3] 00:12:15.673 filename=/dev/nvme0n4 00:12:15.673 Could not set queue depth (nvme0n1) 00:12:15.673 Could not set queue depth (nvme0n2) 00:12:15.673 Could not set queue depth (nvme0n3) 00:12:15.673 Could not set queue depth (nvme0n4) 00:12:15.673 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:15.673 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:15.673 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:15.673 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:15.673 fio-3.35 00:12:15.673 Starting 4 threads 00:12:17.052 00:12:17.052 job0: (groupid=0, jobs=1): err= 0: pid=70871: Mon Dec 16 05:29:56 2024 00:12:17.052 read: IOPS=2665, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1008msec) 00:12:17.052 slat (usec): min=7, max=28954, avg=189.36, stdev=1409.60 00:12:17.052 clat (usec): min=1778, max=51518, avg=25583.51, stdev=6971.38 00:12:17.052 lat (usec): min=13001, max=51549, avg=25772.87, stdev=7030.93 00:12:17.052 clat percentiles (usec): 00:12:17.052 | 1.00th=[13960], 5.00th=[17171], 10.00th=[18482], 20.00th=[19006], 00:12:17.052 | 30.00th=[19268], 40.00th=[20841], 50.00th=[26870], 60.00th=[27919], 00:12:17.052 | 70.00th=[28705], 80.00th=[29230], 90.00th=[36439], 95.00th=[39060], 00:12:17.052 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[47973], 00:12:17.052 | 99.99th=[51643] 00:12:17.052 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:12:17.052 slat (usec): min=10, max=12001, avg=153.37, stdev=1013.15 00:12:17.052 clat (usec): min=7533, max=42319, avg=19189.65, stdev=5201.37 00:12:17.052 lat (usec): min=10916, max=42367, avg=19343.02, stdev=5149.28 00:12:17.052 clat percentiles (usec): 00:12:17.052 | 1.00th=[11207], 5.00th=[13173], 10.00th=[13435], 20.00th=[14091], 00:12:17.052 | 30.00th=[14615], 40.00th=[15664], 50.00th=[17433], 60.00th=[21365], 00:12:17.052 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26084], 95.00th=[26608], 00:12:17.052 | 99.00th=[26870], 99.50th=[26870], 99.90th=[27395], 99.95th=[40633], 00:12:17.052 | 99.99th=[42206] 00:12:17.052 bw ( KiB/s): min=10744, max=13824, per=20.85%, avg=12284.00, stdev=2177.89, samples=2 00:12:17.052 iops : min= 2686, max= 3456, avg=3071.00, stdev=544.47, samples=2 00:12:17.052 lat (msec) : 2=0.02%, 10=0.30%, 20=47.21%, 50=52.46%, 100=0.02% 00:12:17.052 cpu : usr=2.48%, sys=9.53%, ctx=120, majf=0, minf=9 00:12:17.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:17.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:17.052 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:17.052 job1: (groupid=0, jobs=1): err= 0: pid=70872: Mon Dec 16 05:29:56 2024 00:12:17.052 read: IOPS=4909, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1003msec) 00:12:17.052 slat (usec): min=8, max=5956, avg=100.84, stdev=490.15 00:12:17.052 clat (usec): min=696, max=18154, avg=12872.29, stdev=1411.05 00:12:17.052 lat (usec): min=6027, max=23033, avg=12973.13, stdev=1437.68 00:12:17.052 clat percentiles (usec): 00:12:17.052 | 1.00th=[ 6915], 5.00th=[11076], 10.00th=[11731], 20.00th=[12256], 00:12:17.052 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:12:17.052 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13960], 95.00th=[15139], 00:12:17.052 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:12:17.052 | 99.99th=[18220] 00:12:17.052 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:12:17.052 slat (usec): min=10, max=5749, avg=91.06, stdev=521.87 00:12:17.052 clat (usec): min=6257, max=18983, avg=12380.09, stdev=1341.19 00:12:17.052 lat (usec): min=6301, max=19033, avg=12471.15, stdev=1426.66 00:12:17.052 clat 
percentiles (usec): 00:12:17.052 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[11338], 20.00th=[11863], 00:12:17.052 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:12:17.052 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13829], 95.00th=[14353], 00:12:17.053 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[18744], 00:12:17.053 | 99.99th=[19006] 00:12:17.053 bw ( KiB/s): min=20480, max=20521, per=34.79%, avg=20500.50, stdev=28.99, samples=2 00:12:17.053 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:12:17.053 lat (usec) : 750=0.01% 00:12:17.053 lat (msec) : 10=2.93%, 20=97.06% 00:12:17.053 cpu : usr=4.79%, sys=12.67%, ctx=339, majf=0, minf=9 00:12:17.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:17.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:17.053 issued rwts: total=4924,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:17.053 job2: (groupid=0, jobs=1): err= 0: pid=70873: Mon Dec 16 05:29:56 2024 00:12:17.053 read: IOPS=4385, BW=17.1MiB/s (18.0MB/s)(17.3MiB/1008msec) 00:12:17.053 slat (usec): min=7, max=7597, avg=106.83, stdev=675.75 00:12:17.053 clat (usec): min=1694, max=23865, avg=14794.49, stdev=1822.92 00:12:17.053 lat (usec): min=7386, max=28501, avg=14901.32, stdev=1836.59 00:12:17.053 clat percentiles (usec): 00:12:17.053 | 1.00th=[ 7898], 5.00th=[10290], 10.00th=[14091], 20.00th=[14353], 00:12:17.053 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:12:17.053 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15926], 95.00th=[16450], 00:12:17.053 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23725], 99.95th=[23725], 00:12:17.053 | 99.99th=[23987] 00:12:17.053 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:12:17.053 slat (usec): min=5, max=11882, avg=107.81, stdev=666.85 00:12:17.053 clat (usec): min=7116, max=20182, avg=13535.54, stdev=1607.52 00:12:17.053 lat (usec): min=7377, max=20207, avg=13643.35, stdev=1503.75 00:12:17.053 clat percentiles (usec): 00:12:17.053 | 1.00th=[ 7767], 5.00th=[12125], 10.00th=[12387], 20.00th=[12911], 00:12:17.053 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:12:17.053 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14746], 95.00th=[15008], 00:12:17.053 | 99.00th=[20055], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:12:17.053 | 99.99th=[20055] 00:12:17.053 bw ( KiB/s): min=17416, max=19486, per=31.32%, avg=18451.00, stdev=1463.71, samples=2 00:12:17.053 iops : min= 4354, max= 4871, avg=4612.50, stdev=365.57, samples=2 00:12:17.053 lat (msec) : 2=0.01%, 10=3.44%, 20=95.45%, 50=1.10% 00:12:17.053 cpu : usr=3.97%, sys=12.81%, ctx=190, majf=0, minf=13 00:12:17.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:17.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:17.053 issued rwts: total=4421,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:17.053 job3: (groupid=0, jobs=1): err= 0: pid=70875: Mon Dec 16 05:29:56 2024 00:12:17.053 read: IOPS=1776, BW=7106KiB/s (7277kB/s)(7156KiB/1007msec) 00:12:17.053 slat (usec): min=9, max=19169, avg=229.04, stdev=1192.12 00:12:17.053 clat 
(usec): min=1885, max=79282, avg=30834.57, stdev=10563.65 00:12:17.053 lat (usec): min=9154, max=79298, avg=31063.61, stdev=10624.00 00:12:17.053 clat percentiles (usec): 00:12:17.053 | 1.00th=[ 9372], 5.00th=[20841], 10.00th=[24773], 20.00th=[26870], 00:12:17.053 | 30.00th=[27132], 40.00th=[27919], 50.00th=[28181], 60.00th=[28705], 00:12:17.053 | 70.00th=[29754], 80.00th=[33162], 90.00th=[40109], 95.00th=[53216], 00:12:17.053 | 99.00th=[72877], 99.50th=[72877], 99.90th=[79168], 99.95th=[79168], 00:12:17.053 | 99.99th=[79168] 00:12:17.053 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:12:17.053 slat (usec): min=19, max=21607, avg=281.10, stdev=1470.87 00:12:17.053 clat (usec): min=14164, max=89786, avg=34694.12, stdev=17484.84 00:12:17.053 lat (usec): min=14214, max=89811, avg=34975.23, stdev=17606.01 00:12:17.053 clat percentiles (usec): 00:12:17.053 | 1.00th=[16057], 5.00th=[20579], 10.00th=[22938], 20.00th=[24773], 00:12:17.053 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:12:17.053 | 70.00th=[28967], 80.00th=[50594], 90.00th=[65799], 95.00th=[72877], 00:12:17.053 | 99.00th=[85459], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:12:17.053 | 99.99th=[89654] 00:12:17.053 bw ( KiB/s): min= 6816, max= 9587, per=13.92%, avg=8201.50, stdev=1959.39, samples=2 00:12:17.053 iops : min= 1704, max= 2396, avg=2050.00, stdev=489.32, samples=2 00:12:17.053 lat (msec) : 2=0.03%, 10=1.36%, 20=1.90%, 50=81.83%, 100=14.88% 00:12:17.053 cpu : usr=1.49%, sys=7.26%, ctx=167, majf=0, minf=21 00:12:17.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:12:17.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:17.053 issued rwts: total=1789,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:17.053 00:12:17.053 Run status group 0 (all jobs): 00:12:17.053 READ: bw=53.6MiB/s (56.2MB/s), 7106KiB/s-19.2MiB/s (7277kB/s-20.1MB/s), io=54.0MiB (56.6MB), run=1003-1008msec 00:12:17.053 WRITE: bw=57.5MiB/s (60.3MB/s), 8135KiB/s-19.9MiB/s (8330kB/s-20.9MB/s), io=58.0MiB (60.8MB), run=1003-1008msec 00:12:17.053 00:12:17.053 Disk stats (read/write): 00:12:17.053 nvme0n1: ios=2166/2560, merge=0/0, ticks=55506/47337, in_queue=102843, util=88.08% 00:12:17.053 nvme0n2: ios=4145/4527, merge=0/0, ticks=25618/23782, in_queue=49400, util=88.27% 00:12:17.053 nvme0n3: ios=3584/4032, merge=0/0, ticks=50707/50393, in_queue=101100, util=89.20% 00:12:17.053 nvme0n4: ios=1536/1895, merge=0/0, ticks=20929/30040, in_queue=50969, util=89.66% 00:12:17.053 05:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:17.053 05:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70894 00:12:17.053 05:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:17.053 05:29:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:17.053 [global] 00:12:17.053 thread=1 00:12:17.053 invalidate=1 00:12:17.053 rw=read 00:12:17.053 time_based=1 00:12:17.053 runtime=10 00:12:17.053 ioengine=libaio 00:12:17.053 direct=1 00:12:17.053 bs=4096 00:12:17.053 iodepth=1 00:12:17.053 norandommap=1 00:12:17.053 numjobs=1 00:12:17.053 00:12:17.053 [job0] 00:12:17.053 filename=/dev/nvme0n1 00:12:17.053 [job1] 
00:12:17.053 filename=/dev/nvme0n2 00:12:17.053 [job2] 00:12:17.053 filename=/dev/nvme0n3 00:12:17.053 [job3] 00:12:17.053 filename=/dev/nvme0n4 00:12:17.053 Could not set queue depth (nvme0n1) 00:12:17.053 Could not set queue depth (nvme0n2) 00:12:17.053 Could not set queue depth (nvme0n3) 00:12:17.053 Could not set queue depth (nvme0n4) 00:12:17.053 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:17.053 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:17.053 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:17.053 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:17.053 fio-3.35 00:12:17.053 Starting 4 threads 00:12:20.342 05:29:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:20.342 fio: pid=70937, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:20.342 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34127872, buflen=4096 00:12:20.342 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:20.342 fio: pid=70936, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:20.342 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=39784448, buflen=4096 00:12:20.600 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:20.600 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:20.858 fio: pid=70934, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:20.858 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42561536, buflen=4096 00:12:20.858 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:20.858 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:21.117 fio: pid=70935, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:21.117 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=51679232, buflen=4096 00:12:21.117 00:12:21.117 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70934: Mon Dec 16 05:30:01 2024 00:12:21.117 read: IOPS=2906, BW=11.4MiB/s (11.9MB/s)(40.6MiB/3575msec) 00:12:21.117 slat (usec): min=8, max=14646, avg=20.95, stdev=208.07 00:12:21.117 clat (usec): min=166, max=2717, avg=321.31, stdev=58.05 00:12:21.117 lat (usec): min=178, max=15004, avg=342.26, stdev=216.77 00:12:21.117 clat percentiles (usec): 00:12:21.117 | 1.00th=[ 194], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 289], 00:12:21.117 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:12:21.117 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 379], 00:12:21.117 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 627], 99.95th=[ 881], 00:12:21.117 | 99.99th=[ 2278] 00:12:21.117 bw ( KiB/s): min=10072, max=11648, per=27.63%, 
avg=11348.00, stdev=626.39, samples=6 00:12:21.117 iops : min= 2518, max= 2912, avg=2837.00, stdev=156.60, samples=6 00:12:21.117 lat (usec) : 250=2.19%, 500=96.39%, 750=1.34%, 1000=0.02% 00:12:21.117 lat (msec) : 2=0.03%, 4=0.02% 00:12:21.117 cpu : usr=1.09%, sys=4.42%, ctx=10403, majf=0, minf=1 00:12:21.117 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.117 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.117 issued rwts: total=10392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.117 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.117 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70935: Mon Dec 16 05:30:01 2024 00:12:21.117 read: IOPS=3156, BW=12.3MiB/s (12.9MB/s)(49.3MiB/3998msec) 00:12:21.117 slat (usec): min=8, max=13693, avg=22.18, stdev=220.40 00:12:21.117 clat (usec): min=162, max=3988, avg=292.98, stdev=74.60 00:12:21.117 lat (usec): min=177, max=16233, avg=315.16, stdev=247.97 00:12:21.117 clat percentiles (usec): 00:12:21.117 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 260], 00:12:21.117 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 318], 00:12:21.117 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 359], 00:12:21.117 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 685], 99.95th=[ 914], 00:12:21.117 | 99.99th=[ 3064] 00:12:21.117 bw ( KiB/s): min=11576, max=12805, per=28.98%, avg=11904.71, stdev=440.17, samples=7 00:12:21.117 iops : min= 2894, max= 3201, avg=2976.14, stdev=109.96, samples=7 00:12:21.117 lat (usec) : 250=18.71%, 500=81.10%, 750=0.10%, 1000=0.05% 00:12:21.117 lat (msec) : 2=0.02%, 4=0.02% 00:12:21.117 cpu : usr=1.15%, sys=4.85%, ctx=12628, majf=0, minf=1 00:12:21.117 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.118 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.118 issued rwts: total=12618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.118 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.118 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70936: Mon Dec 16 05:30:01 2024 00:12:21.118 read: IOPS=2962, BW=11.6MiB/s (12.1MB/s)(37.9MiB/3279msec) 00:12:21.118 slat (usec): min=9, max=10420, avg=17.65, stdev=137.84 00:12:21.118 clat (usec): min=178, max=3892, avg=318.29, stdev=84.82 00:12:21.118 lat (usec): min=192, max=10675, avg=335.94, stdev=161.11 00:12:21.118 clat percentiles (usec): 00:12:21.118 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 215], 20.00th=[ 297], 00:12:21.118 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:12:21.118 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 375], 00:12:21.118 | 99.00th=[ 404], 99.50th=[ 523], 99.90th=[ 963], 99.95th=[ 2114], 00:12:21.118 | 99.99th=[ 3884] 00:12:21.118 bw ( KiB/s): min=11072, max=11896, per=27.90%, avg=11461.33, stdev=279.18, samples=6 00:12:21.118 iops : min= 2768, max= 2974, avg=2865.33, stdev=69.80, samples=6 00:12:21.118 lat (usec) : 250=13.91%, 500=85.54%, 750=0.33%, 1000=0.12% 00:12:21.118 lat (msec) : 2=0.04%, 4=0.05% 00:12:21.118 cpu : usr=1.40%, sys=3.90%, ctx=9716, majf=0, minf=2 00:12:21.118 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.118 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.118 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.118 issued rwts: total=9714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.118 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.118 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70937: Mon Dec 16 05:30:01 2024 00:12:21.118 read: IOPS=2819, BW=11.0MiB/s (11.5MB/s)(32.5MiB/2955msec) 00:12:21.118 slat (nsec): min=9985, max=87785, avg=18740.30, stdev=6426.89 00:12:21.118 clat (usec): min=195, max=7894, avg=333.80, stdev=117.10 00:12:21.118 lat (usec): min=211, max=7912, avg=352.54, stdev=117.48 00:12:21.118 clat percentiles (usec): 00:12:21.118 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:12:21.118 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 334], 00:12:21.118 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 371], 00:12:21.118 | 99.00th=[ 408], 99.50th=[ 545], 99.90th=[ 1188], 99.95th=[ 2442], 00:12:21.118 | 99.99th=[ 7898] 00:12:21.118 bw ( KiB/s): min=10704, max=11592, per=27.39%, avg=11248.00, stdev=359.78, samples=5 00:12:21.118 iops : min= 2676, max= 2898, avg=2812.00, stdev=89.94, samples=5 00:12:21.118 lat (usec) : 250=0.13%, 500=99.23%, 750=0.42%, 1000=0.08% 00:12:21.118 lat (msec) : 2=0.04%, 4=0.06%, 10=0.02% 00:12:21.118 cpu : usr=0.98%, sys=5.18%, ctx=8336, majf=0, minf=2 00:12:21.118 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.118 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.118 issued rwts: total=8333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.118 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.118 00:12:21.118 Run status group 0 (all jobs): 00:12:21.118 READ: bw=40.1MiB/s (42.1MB/s), 11.0MiB/s-12.3MiB/s (11.5MB/s-12.9MB/s), io=160MiB (168MB), run=2955-3998msec 00:12:21.118 00:12:21.118 Disk stats (read/write): 00:12:21.118 nvme0n1: ios=9692/0, merge=0/0, ticks=3163/0, in_queue=3163, util=95.22% 00:12:21.118 nvme0n2: ios=11986/0, merge=0/0, ticks=3681/0, in_queue=3681, util=95.68% 00:12:21.118 nvme0n3: ios=9015/0, merge=0/0, ticks=2795/0, in_queue=2795, util=96.27% 00:12:21.118 nvme0n4: ios=8083/0, merge=0/0, ticks=2672/0, in_queue=2672, util=96.42% 00:12:21.376 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:21.376 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:21.943 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:21.943 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:22.202 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:22.202 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:22.768 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:22.768 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:23.361 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.361 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:23.619 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:23.619 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70894 00:12:23.619 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:23.619 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.877 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.877 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.877 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.877 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.877 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.877 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.877 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:23.877 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:23.877 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:23.878 nvmf hotplug test: fio failed as expected 00:12:23.878 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.136 05:30:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.136 rmmod nvme_tcp 00:12:24.136 rmmod nvme_fabrics 00:12:24.136 rmmod nvme_keyring 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 70496 ']' 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 70496 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 70496 ']' 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 70496 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70496 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.136 killing process with pid 70496 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70496' 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 70496 00:12:24.136 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 70496 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:25.513 00:12:25.513 real 0m22.877s 00:12:25.513 user 1m23.887s 00:12:25.513 sys 0m10.972s 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.513 ************************************ 00:12:25.513 END TEST nvmf_fio_target 00:12:25.513 ************************************ 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:25.513 ************************************ 00:12:25.513 START TEST nvmf_bdevio 00:12:25.513 ************************************ 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:25.513 * Looking for test storage... 
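The xtrace output that follows steps through SPDK's shell version helpers (lt, cmp_versions, decimal) to check whether the installed lcov predates version 2. As a rough illustration of what those traced calls imply -- a condensed bash sketch covering only the '<' path exercised here, not the verbatim scripts/common.sh -- the comparison logic amounts to:

decimal() {
    # Echo the field if it is purely numeric, otherwise 0 (the trace shows "1" and "2" passing through).
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    local ver1 ver1_l ver2 ver2_l
    IFS=.-: read -ra ver1 <<< "$1"   # split e.g. "1.15" into fields on . - :
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    case "$op" in
        '<') ;;
        *) return 2 ;;               # other operators omitted from this sketch
    esac
    local v
    # Walk the longer of the two version arrays field by field.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        (( ver1[v] > ver2[v] )) && return 1   # first differing field decides
        (( ver1[v] < ver2[v] )) && return 0
    done
    return 1                         # equal versions do not satisfy '<'
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo 'lcov predates 2.x'   # matches the traced outcome: 1.15 < 2 holds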
00:12:25.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:12:25.513 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:25.780 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:25.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.781 --rc genhtml_branch_coverage=1 00:12:25.781 --rc genhtml_function_coverage=1 00:12:25.781 --rc genhtml_legend=1 00:12:25.781 --rc geninfo_all_blocks=1 00:12:25.781 --rc geninfo_unexecuted_blocks=1 00:12:25.781 00:12:25.781 ' 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:25.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.781 --rc genhtml_branch_coverage=1 00:12:25.781 --rc genhtml_function_coverage=1 00:12:25.781 --rc genhtml_legend=1 00:12:25.781 --rc geninfo_all_blocks=1 00:12:25.781 --rc geninfo_unexecuted_blocks=1 00:12:25.781 00:12:25.781 ' 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:25.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.781 --rc genhtml_branch_coverage=1 00:12:25.781 --rc genhtml_function_coverage=1 00:12:25.781 --rc genhtml_legend=1 00:12:25.781 --rc geninfo_all_blocks=1 00:12:25.781 --rc geninfo_unexecuted_blocks=1 00:12:25.781 00:12:25.781 ' 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:25.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.781 --rc genhtml_branch_coverage=1 00:12:25.781 --rc genhtml_function_coverage=1 00:12:25.781 --rc genhtml_legend=1 00:12:25.781 --rc geninfo_all_blocks=1 00:12:25.781 --rc geninfo_unexecuted_blocks=1 00:12:25.781 00:12:25.781 ' 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:25.781 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.782 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
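bdevio.sh now calls nvmftestinit, and the trace that follows shows which branches it takes for this configuration (NET_TYPE=virt, transport tcp). A condensed bash sketch of that decision flow, assuming the helpers from test/nvmf/common.sh (nvmftestfini, _remove_spdk_ns, nvmf_veth_init) are already sourced; nvmftestinit_sketch is a hypothetical name, not the actual implementation.

# Condensed approximation of the nvmftestinit flow traced below.
nvmftestinit_sketch() {
    [[ -z $TEST_TRANSPORT ]] && return 1          # transport must be set (tcp here)
    trap nvmftestfini SIGINT SIGTERM EXIT         # always tear the topology down
    local is_hw=no
    _remove_spdk_ns                               # drop any stale nvmf_tgt_ns_spdk namespace
    if [[ $NET_TYPE == virt && $TEST_TRANSPORT == tcp ]]; then
        nvmf_veth_init                            # build the veth/bridge topology shown below
    fi
    NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
    if [[ $TEST_TRANSPORT == tcp ]]; then
        NVMF_TRANSPORT_OPTS+=" -o"
        modprobe nvme-tcp                         # host-side kernel initiator
    fi
}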
00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.782 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:25.783 Cannot find device "nvmf_init_br" 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:25.783 Cannot find device "nvmf_init_br2" 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:25.783 Cannot find device "nvmf_tgt_br" 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:25.783 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.784 Cannot find device "nvmf_tgt_br2" 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:25.784 Cannot find device "nvmf_init_br" 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:25.784 Cannot find device "nvmf_init_br2" 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:25.784 Cannot find device "nvmf_tgt_br" 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:25.784 Cannot find device "nvmf_tgt_br2" 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:25.784 Cannot find device "nvmf_br" 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:25.784 Cannot find device "nvmf_init_if" 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:25.784 Cannot find device "nvmf_init_if2" 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.784 
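With the stale-device cleanup attempts done and the nvmf_tgt_ns_spdk namespace created, the commands that follow assemble the virtual test network: two initiator-side veth pairs kept in the root namespace, two target-side pairs whose *_if ends move into the namespace, and one bridge joining all the *_br ends. A condensed sketch of that topology, plain iproute2 commands read off the trace; bringing up the *_if ends and the in-namespace loopback is abbreviated here.

# Namespace for the SPDK target plus four veth pairs and one bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator 10.0.0.1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator 10.0.0.2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target    10.0.0.3
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target    10.0.0.4

# Target ends live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# One bridge ties the root-namespace ends of all four pairs together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done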
05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:25.784 05:30:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:25.784 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:25.784 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:25.785 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:26.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:26.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:12:26.046 00:12:26.046 --- 10.0.0.3 ping statistics --- 00:12:26.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.046 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:26.046 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:26.046 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:12:26.046 00:12:26.046 --- 10.0.0.4 ping statistics --- 00:12:26.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.046 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:26.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:26.046 00:12:26.046 --- 10.0.0.1 ping statistics --- 00:12:26.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.046 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:26.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:26.046 00:12:26.046 --- 10.0.0.2 ping statistics --- 00:12:26.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.046 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=71294 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 71294 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 71294 ']' 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.046 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:26.304 [2024-12-16 05:30:06.326648] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
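After the ping checks confirm all four addresses answer, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... -i 0 -e 0xFFFF -m 0x78, pid 71294 in this run) and waitforlisten polls until the app answers on /var/tmp/spdk.sock. A rough bash sketch of that start-and-wait pattern, assuming NVMF_TARGET_NS_CMD plus $rootdir and $SPDK_BIN_DIR as laid out in this repo; waitforlisten_sketch is a hypothetical name and the real helper retries and reports errors differently.

# Start the target inside its namespace and remember the pid.
"${NVMF_TARGET_NS_CMD[@]}" "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!

# Poll the RPC socket until the app is ready (or the process dies).
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1                      # target exited early
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}
waitforlisten_sketch "$nvmfpid"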
00:12:26.304 [2024-12-16 05:30:06.326788] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.304 [2024-12-16 05:30:06.508047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.562 [2024-12-16 05:30:06.639271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.562 [2024-12-16 05:30:06.639340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.562 [2024-12-16 05:30:06.639357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.562 [2024-12-16 05:30:06.639368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.562 [2024-12-16 05:30:06.639379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.562 [2024-12-16 05:30:06.641461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:12:26.562 [2024-12-16 05:30:06.641677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:12:26.562 [2024-12-16 05:30:06.641763] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:12:26.562 [2024-12-16 05:30:06.642137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.562 [2024-12-16 05:30:06.817152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:27.128 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.128 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:27.128 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.128 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.128 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:27.128 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.128 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.128 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.128 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:27.128 [2024-12-16 05:30:07.363948] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:27.386 Malloc0 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:27.386 [2024-12-16 05:30:07.487259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:27.386 { 00:12:27.386 "params": { 00:12:27.386 "name": "Nvme$subsystem", 00:12:27.386 "trtype": "$TEST_TRANSPORT", 00:12:27.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:27.386 "adrfam": "ipv4", 00:12:27.386 "trsvcid": "$NVMF_PORT", 00:12:27.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:27.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:27.386 "hdgst": ${hdgst:-false}, 00:12:27.386 "ddgst": ${ddgst:-false} 00:12:27.386 }, 00:12:27.386 "method": "bdev_nvme_attach_controller" 00:12:27.386 } 00:12:27.386 EOF 00:12:27.386 )") 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
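The bdevio run is wired up here: the subsystem listening on 10.0.0.3:4420 was created through rpc.py, and gen_nvmf_target_json emits a bdev_nvme_attach_controller entry (printed just below) that bdevio reads through --json on a /dev/fd path. A condensed sketch of that entry and of the process-substitution handoff; gen_nvmf_target_json_sketch is a hypothetical name, and whatever wrapper the real helper puts around the entry is omitted.

# Emit the controller entry that points bdevio at the target created above.
gen_nvmf_target_json_sketch() {
    cat <<JSON
{
  "params": {
    "name": "Nvme1",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON
}

# Process substitution supplies the config as a /dev/fd/NN path, so nothing
# lands on disk (the trace shows /dev/fd/62 for this run).
"$rootdir/test/bdev/bdevio/bdevio" --json <(gen_nvmf_target_json_sketch)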
00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:27.386 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:27.386 "params": { 00:12:27.386 "name": "Nvme1", 00:12:27.386 "trtype": "tcp", 00:12:27.386 "traddr": "10.0.0.3", 00:12:27.386 "adrfam": "ipv4", 00:12:27.386 "trsvcid": "4420", 00:12:27.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:27.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:27.386 "hdgst": false, 00:12:27.386 "ddgst": false 00:12:27.386 }, 00:12:27.386 "method": "bdev_nvme_attach_controller" 00:12:27.386 }' 00:12:27.386 [2024-12-16 05:30:07.604207] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:27.386 [2024-12-16 05:30:07.604368] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71330 ] 00:12:27.645 [2024-12-16 05:30:07.796989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:27.904 [2024-12-16 05:30:07.926715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.904 [2024-12-16 05:30:07.926806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.904 [2024-12-16 05:30:07.926818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.904 [2024-12-16 05:30:08.127733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:28.162 I/O targets: 00:12:28.162 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:28.162 00:12:28.162 00:12:28.162 CUnit - A unit testing framework for C - Version 2.1-3 00:12:28.162 http://cunit.sourceforge.net/ 00:12:28.162 00:12:28.162 00:12:28.162 Suite: bdevio tests on: Nvme1n1 00:12:28.162 Test: blockdev write read block ...passed 00:12:28.162 Test: blockdev write zeroes read block ...passed 00:12:28.162 Test: blockdev write zeroes read no split ...passed 00:12:28.162 Test: blockdev write zeroes read split ...passed 00:12:28.162 Test: blockdev write zeroes read split partial ...passed 00:12:28.162 Test: blockdev reset ...[2024-12-16 05:30:08.393763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:28.162 [2024-12-16 05:30:08.393948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:12:28.162 [2024-12-16 05:30:08.414745] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:28.162 passed 00:12:28.162 Test: blockdev write read 8 blocks ...passed 00:12:28.162 Test: blockdev write read size > 128k ...passed 00:12:28.162 Test: blockdev write read invalid size ...passed 00:12:28.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.162 Test: blockdev write read max offset ...passed 00:12:28.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.162 Test: blockdev writev readv 8 blocks ...passed 00:12:28.420 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.420 Test: blockdev writev readv block ...passed 00:12:28.420 Test: blockdev writev readv size > 128k ...passed 00:12:28.420 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.420 Test: blockdev comparev and writev ...[2024-12-16 05:30:08.425940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:28.420 [2024-12-16 05:30:08.426005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:28.420 [2024-12-16 05:30:08.426037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:28.420 [2024-12-16 05:30:08.426058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:28.420 [2024-12-16 05:30:08.426430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:28.420 [2024-12-16 05:30:08.426470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:28.420 [2024-12-16 05:30:08.426498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:28.420 [2024-12-16 05:30:08.426517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:28.420 [2024-12-16 05:30:08.427118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:28.420 [2024-12-16 05:30:08.427154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:28.420 [2024-12-16 05:30:08.427181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:28.420 [2024-12-16 05:30:08.427203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:28.420 [2024-12-16 05:30:08.427584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:28.420 [2024-12-16 05:30:08.427634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:28.420 [2024-12-16 05:30:08.427661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:28.420 [2024-12-16 05:30:08.427680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:28.420 passed 00:12:28.420 Test: blockdev nvme passthru rw ...passed 00:12:28.420 Test: blockdev nvme passthru vendor specific ...[2024-12-16 05:30:08.428756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:28.420 [2024-12-16 05:30:08.428802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:28.420 [2024-12-16 05:30:08.428967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:28.420 [2024-12-16 05:30:08.428996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:28.420 passed 00:12:28.420 Test: blockdev nvme admin passthru ...[2024-12-16 05:30:08.429138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:28.420 [2024-12-16 05:30:08.429173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:28.420 [2024-12-16 05:30:08.429318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:28.420 [2024-12-16 05:30:08.429353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:28.420 passed 00:12:28.420 Test: blockdev copy ...passed 00:12:28.420 00:12:28.420 Run Summary: Type Total Ran Passed Failed Inactive 00:12:28.420 suites 1 1 n/a 0 0 00:12:28.420 tests 23 23 23 0 0 00:12:28.420 asserts 152 152 152 0 n/a 00:12:28.420 00:12:28.420 Elapsed time = 0.295 seconds 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.355 rmmod nvme_tcp 00:12:29.355 rmmod nvme_fabrics 00:12:29.355 rmmod nvme_keyring 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
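The unload trace just above shows nvmfcleanup's retry strategy: sync, relax errexit, and loop up to 20 times removing nvme-tcp (the module can stay busy for a moment while the last connection tears down) before removing nvme-fabrics. A condensed approximation; nvmfcleanup_sketch is a hypothetical name and the real loop orders its checks slightly differently.

# Approximation of the module-unload retry traced above.
nvmfcleanup_sketch() {
    sync
    set +e                                         # unload may fail while references remain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    return 0
}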
00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 71294 ']' 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 71294 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 71294 ']' 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 71294 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.355 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71294 00:12:29.614 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:29.614 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:29.614 killing process with pid 71294 00:12:29.614 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71294' 00:12:29.614 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 71294 00:12:29.614 05:30:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 71294 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:30.549 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.809 05:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.809 ************************************ 00:12:30.809 END TEST nvmf_bdevio 00:12:30.809 ************************************ 00:12:30.809 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:30.809 00:12:30.809 real 0m5.380s 00:12:30.809 user 0m20.016s 00:12:30.809 sys 0m1.071s 00:12:30.809 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.809 05:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.809 05:30:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:31.068 ************************************ 00:12:31.068 END TEST nvmf_target_core 00:12:31.068 ************************************ 00:12:31.068 00:12:31.068 real 2m56.365s 00:12:31.068 user 7m50.765s 00:12:31.068 sys 0m53.782s 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:31.068 05:30:11 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:31.068 05:30:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.068 05:30:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.068 05:30:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.068 ************************************ 00:12:31.068 START TEST nvmf_target_extra 00:12:31.068 ************************************ 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:31.068 * Looking for test storage... 
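The firewall cleanup in the teardown above (iptables-save piped through grep -v SPDK_NVMF into iptables-restore) works because every rule the harness added earlier was stamped with an SPDK_NVMF comment. A small sketch of that tag-and-restore pattern, read off the ipts and iptr calls in this log; the _sketch names are hypothetical.

# Insert a rule and stamp it so teardown can find it again.
ipts_sketch() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Restore the saved ruleset minus every SPDK_NVMF-tagged rule.
iptr_sketch() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# Example, matching the accept rules installed before the ping checks:
# ipts_sketch -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT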
00:12:31.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:31.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.068 --rc genhtml_branch_coverage=1 00:12:31.068 --rc genhtml_function_coverage=1 00:12:31.068 --rc genhtml_legend=1 00:12:31.068 --rc geninfo_all_blocks=1 00:12:31.068 --rc geninfo_unexecuted_blocks=1 00:12:31.068 00:12:31.068 ' 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:31.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.068 --rc genhtml_branch_coverage=1 00:12:31.068 --rc genhtml_function_coverage=1 00:12:31.068 --rc genhtml_legend=1 00:12:31.068 --rc geninfo_all_blocks=1 00:12:31.068 --rc geninfo_unexecuted_blocks=1 00:12:31.068 00:12:31.068 ' 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:31.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.068 --rc genhtml_branch_coverage=1 00:12:31.068 --rc genhtml_function_coverage=1 00:12:31.068 --rc genhtml_legend=1 00:12:31.068 --rc geninfo_all_blocks=1 00:12:31.068 --rc geninfo_unexecuted_blocks=1 00:12:31.068 00:12:31.068 ' 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:31.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.068 --rc genhtml_branch_coverage=1 00:12:31.068 --rc genhtml_function_coverage=1 00:12:31.068 --rc genhtml_legend=1 00:12:31.068 --rc geninfo_all_blocks=1 00:12:31.068 --rc geninfo_unexecuted_blocks=1 00:12:31.068 00:12:31.068 ' 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.068 05:30:11 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.068 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.069 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.069 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.327 ************************************ 00:12:31.327 START TEST nvmf_auth_target 00:12:31.327 ************************************ 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:31.327 * Looking for test storage... 
00:12:31.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.327 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:31.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.328 --rc genhtml_branch_coverage=1 00:12:31.328 --rc genhtml_function_coverage=1 00:12:31.328 --rc genhtml_legend=1 00:12:31.328 --rc geninfo_all_blocks=1 00:12:31.328 --rc geninfo_unexecuted_blocks=1 00:12:31.328 00:12:31.328 ' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:31.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.328 --rc genhtml_branch_coverage=1 00:12:31.328 --rc genhtml_function_coverage=1 00:12:31.328 --rc genhtml_legend=1 00:12:31.328 --rc geninfo_all_blocks=1 00:12:31.328 --rc geninfo_unexecuted_blocks=1 00:12:31.328 00:12:31.328 ' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:31.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.328 --rc genhtml_branch_coverage=1 00:12:31.328 --rc genhtml_function_coverage=1 00:12:31.328 --rc genhtml_legend=1 00:12:31.328 --rc geninfo_all_blocks=1 00:12:31.328 --rc geninfo_unexecuted_blocks=1 00:12:31.328 00:12:31.328 ' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:31.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.328 --rc genhtml_branch_coverage=1 00:12:31.328 --rc genhtml_function_coverage=1 00:12:31.328 --rc genhtml_legend=1 00:12:31.328 --rc geninfo_all_blocks=1 00:12:31.328 --rc geninfo_unexecuted_blocks=1 00:12:31.328 00:12:31.328 ' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.328 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:31.328 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.329 
05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:31.329 Cannot find device "nvmf_init_br" 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:31.329 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:31.587 Cannot find device "nvmf_init_br2" 00:12:31.587 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:31.588 Cannot find device "nvmf_tgt_br" 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.588 Cannot find device "nvmf_tgt_br2" 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:31.588 Cannot find device "nvmf_init_br" 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:31.588 Cannot find device "nvmf_init_br2" 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:31.588 Cannot find device "nvmf_tgt_br" 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:31.588 Cannot find device "nvmf_tgt_br2" 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:31.588 Cannot find device "nvmf_br" 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:31.588 Cannot find device "nvmf_init_if" 00:12:31.588 05:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:31.588 Cannot find device "nvmf_init_if2" 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.588 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.588 05:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:31.588 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:31.847 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:31.847 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:12:31.847 00:12:31.847 --- 10.0.0.3 ping statistics --- 00:12:31.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.847 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:31.847 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:31.847 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:12:31.847 00:12:31.847 --- 10.0.0.4 ping statistics --- 00:12:31.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.847 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:31.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:31.847 00:12:31.847 --- 10.0.0.1 ping statistics --- 00:12:31.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.847 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:31.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:12:31.847 00:12:31.847 --- 10.0.0.2 ping statistics --- 00:12:31.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.847 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=71669 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 71669 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71669 ']' 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
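For readability, the virtual test topology that the nvmf_veth_init trace above builds can be condensed into the following sketch. It reuses the device names and 10.0.0.x/24 addresses shown in the log; the real helper in test/nvmf/common.sh additionally tears down any stale devices first, which is what the earlier "Cannot find device ..." lines reflect. This is an illustrative summary, not a replacement for the helper.

    # Target-side veth ends live in their own network namespace; initiator ends stay in the root one.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1/.2 on the initiator side, 10.0.0.3/.4 inside the target namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring the links up and join all bridge-side ends into one L2 segment via nvmf_br.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open NVMe/TCP port 4420 on the initiator interfaces, allow bridge forwarding, and sanity-check.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, the target application is launched inside it (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), which is why NVMF_APP in the trace is prefixed with NVMF_TARGET_NS_CMD before nvmfappstart runs.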
00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.847 05:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.785 05:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.785 05:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:32.785 05:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.785 05:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.785 05:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=71701 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1e5c42aadb5061458e1a17253d38579957a0e90678e54fc2 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.UYO 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1e5c42aadb5061458e1a17253d38579957a0e90678e54fc2 0 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1e5c42aadb5061458e1a17253d38579957a0e90678e54fc2 0 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1e5c42aadb5061458e1a17253d38579957a0e90678e54fc2 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:32.785 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.045 05:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.UYO 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.UYO 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.UYO 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=94b948e85c7d4f2260d4b7fb45d768ba1287703a95c1122132759628165d4a6a 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.w1E 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 94b948e85c7d4f2260d4b7fb45d768ba1287703a95c1122132759628165d4a6a 3 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 94b948e85c7d4f2260d4b7fb45d768ba1287703a95c1122132759628165d4a6a 3 00:12:33.045 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=94b948e85c7d4f2260d4b7fb45d768ba1287703a95c1122132759628165d4a6a 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.w1E 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.w1E 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.w1E 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:33.046 05:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fea1befa98245a197f1baf0f4cf8c647 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0a6 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fea1befa98245a197f1baf0f4cf8c647 1 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fea1befa98245a197f1baf0f4cf8c647 1 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fea1befa98245a197f1baf0f4cf8c647 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0a6 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0a6 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.0a6 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=922cee07c7d673d468d6131b1da8582104a19bf028390cdc 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nW5 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 922cee07c7d673d468d6131b1da8582104a19bf028390cdc 2 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 922cee07c7d673d468d6131b1da8582104a19bf028390cdc 2 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=922cee07c7d673d468d6131b1da8582104a19bf028390cdc 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nW5 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nW5 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.nW5 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=df5e2eb21e0d979c8514dd6b22271c1a07ef0c4b742667bc 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.u4V 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key df5e2eb21e0d979c8514dd6b22271c1a07ef0c4b742667bc 2 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 df5e2eb21e0d979c8514dd6b22271c1a07ef0c4b742667bc 2 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=df5e2eb21e0d979c8514dd6b22271c1a07ef0c4b742667bc 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:33.046 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.u4V 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.u4V 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.u4V 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.306 05:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=53901a49ad14b90f7a14c24c2fe7cda5 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RgB 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 53901a49ad14b90f7a14c24c2fe7cda5 1 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 53901a49ad14b90f7a14c24c2fe7cda5 1 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=53901a49ad14b90f7a14c24c2fe7cda5 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RgB 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RgB 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.RgB 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=becd98884f994bed9497ef0333bbd6bd55d7568c017a79e727ac7422f7fcec06 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.LwZ 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
becd98884f994bed9497ef0333bbd6bd55d7568c017a79e727ac7422f7fcec06 3 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 becd98884f994bed9497ef0333bbd6bd55d7568c017a79e727ac7422f7fcec06 3 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=becd98884f994bed9497ef0333bbd6bd55d7568c017a79e727ac7422f7fcec06 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.LwZ 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.LwZ 00:12:33.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.LwZ 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 71669 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71669 ']' 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.306 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:33.565 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.565 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:33.565 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 71701 /var/tmp/host.sock 00:12:33.565 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 71701 ']' 00:12:33.565 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:33.565 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.565 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
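The key material used for DH-HMAC-CHAP in this run comes from the gen_dhchap_key calls traced above: each reads the required number of random bytes with xxd, wraps them into a "DHHC-1:<digest id>:...:" secret via an inline python snippet, and stores the result in a mode-0600 temp file (spdk.key-null.UYO, spdk.key-sha512.w1E, and so on). A minimal stand-alone sketch of the same idea follows; it assumes the TP-8006 secret representation (base64 of the key bytes followed by their little-endian CRC-32) and uses "00" as the digest index for a null-hash key, so treat it as illustrative rather than a drop-in replacement for the helper in test/nvmf/common.sh.

    # Sketch: produce a 48-hex-character (24-byte) DH-HMAC-CHAP secret, as in "gen_dhchap_key null 48".
    len_bytes=24
    hexkey=$(xxd -p -c0 -l "$len_bytes" /dev/urandom)
    keyfile=$(mktemp -t spdk.key-null.XXX)
    # Assumed encoding: base64(key || CRC-32(key) little-endian), wrapped as DHHC-1:<digest>:<payload>:
    python3 -c 'import base64,sys,zlib; key=bytes.fromhex(sys.argv[1]); crc=zlib.crc32(key).to_bytes(4,"little"); print("DHHC-1:00:" + base64.b64encode(key+crc).decode() + ":")' "$hexkey" > "$keyfile"
    chmod 0600 "$keyfile"
    echo "$keyfile"

Once generated, each secret is registered twice: with rpc_cmd keyring_file_add_key keyN <file> against the target's /var/tmp/spdk.sock, and with rpc.py -s /var/tmp/host.sock keyring_file_add_key keyN <file> on the host side, which is the pattern the following trace lines repeat for key0..key3 and their controller keys where present.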
00:12:33.565 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.565 05:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UYO 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UYO 00:12:34.134 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UYO 00:12:34.394 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.w1E ]] 00:12:34.394 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.w1E 00:12:34.394 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.394 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.394 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.394 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.w1E 00:12:34.394 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.w1E 00:12:34.962 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:34.962 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0a6 00:12:34.962 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.962 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.962 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.962 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0a6 00:12:34.962 05:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0a6 00:12:35.231 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.nW5 ]] 00:12:35.231 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nW5 00:12:35.231 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.231 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.231 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.231 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nW5 00:12:35.231 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nW5 00:12:35.510 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:35.510 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.u4V 00:12:35.510 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.510 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.510 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.510 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.u4V 00:12:35.510 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.u4V 00:12:35.769 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.RgB ]] 00:12:35.769 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RgB 00:12:35.769 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.769 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.769 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.769 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RgB 00:12:35.769 05:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RgB 00:12:36.028 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:36.028 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LwZ 00:12:36.028 05:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.028 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.028 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.028 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.LwZ 00:12:36.028 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.LwZ 00:12:36.286 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:36.286 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:36.286 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.286 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.286 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:36.286 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.544 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.803 00:12:36.803 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.803 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.803 05:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.062 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.062 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.062 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.062 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.062 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.062 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.062 { 00:12:37.062 "cntlid": 1, 00:12:37.062 "qid": 0, 00:12:37.062 "state": "enabled", 00:12:37.062 "thread": "nvmf_tgt_poll_group_000", 00:12:37.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:37.062 "listen_address": { 00:12:37.062 "trtype": "TCP", 00:12:37.062 "adrfam": "IPv4", 00:12:37.062 "traddr": "10.0.0.3", 00:12:37.062 "trsvcid": "4420" 00:12:37.062 }, 00:12:37.062 "peer_address": { 00:12:37.062 "trtype": "TCP", 00:12:37.062 "adrfam": "IPv4", 00:12:37.062 "traddr": "10.0.0.1", 00:12:37.062 "trsvcid": "51640" 00:12:37.062 }, 00:12:37.062 "auth": { 00:12:37.062 "state": "completed", 00:12:37.062 "digest": "sha256", 00:12:37.062 "dhgroup": "null" 00:12:37.062 } 00:12:37.062 } 00:12:37.062 ]' 00:12:37.062 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.321 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.321 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.321 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:37.321 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.321 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.321 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.321 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.580 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:12:37.580 05:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:12:41.772 05:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.772 05:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:41.772 05:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.772 05:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.772 05:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.772 05:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.772 05:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:41.772 05:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:42.031 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:42.031 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.031 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:42.031 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:42.031 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:42.031 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.031 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.032 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.032 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.032 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.032 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.032 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.032 05:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.291 00:12:42.291 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.291 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.291 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.550 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.550 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.550 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.550 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.550 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.550 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.550 { 00:12:42.550 "cntlid": 3, 00:12:42.550 "qid": 0, 00:12:42.550 "state": "enabled", 00:12:42.550 "thread": "nvmf_tgt_poll_group_000", 00:12:42.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:42.550 "listen_address": { 00:12:42.550 "trtype": "TCP", 00:12:42.550 "adrfam": "IPv4", 00:12:42.550 "traddr": "10.0.0.3", 00:12:42.550 "trsvcid": "4420" 00:12:42.550 }, 00:12:42.550 "peer_address": { 00:12:42.550 "trtype": "TCP", 00:12:42.550 "adrfam": "IPv4", 00:12:42.550 "traddr": "10.0.0.1", 00:12:42.550 "trsvcid": "51662" 00:12:42.550 }, 00:12:42.550 "auth": { 00:12:42.550 "state": "completed", 00:12:42.550 "digest": "sha256", 00:12:42.550 "dhgroup": "null" 00:12:42.550 } 00:12:42.550 } 00:12:42.550 ]' 00:12:42.550 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.809 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.809 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.809 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:42.809 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.809 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.809 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.809 05:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.069 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret 
DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:12:43.069 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:12:43.637 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.637 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:43.637 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.637 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.895 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.895 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.895 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:43.895 05:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.155 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.414 00:12:44.414 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.414 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.414 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.673 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.674 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.674 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.674 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.674 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.674 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.674 { 00:12:44.674 "cntlid": 5, 00:12:44.674 "qid": 0, 00:12:44.674 "state": "enabled", 00:12:44.674 "thread": "nvmf_tgt_poll_group_000", 00:12:44.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:44.674 "listen_address": { 00:12:44.674 "trtype": "TCP", 00:12:44.674 "adrfam": "IPv4", 00:12:44.674 "traddr": "10.0.0.3", 00:12:44.674 "trsvcid": "4420" 00:12:44.674 }, 00:12:44.674 "peer_address": { 00:12:44.674 "trtype": "TCP", 00:12:44.674 "adrfam": "IPv4", 00:12:44.674 "traddr": "10.0.0.1", 00:12:44.674 "trsvcid": "51686" 00:12:44.674 }, 00:12:44.674 "auth": { 00:12:44.674 "state": "completed", 00:12:44.674 "digest": "sha256", 00:12:44.674 "dhgroup": "null" 00:12:44.674 } 00:12:44.674 } 00:12:44.674 ]' 00:12:44.674 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.674 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:44.674 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.933 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:44.933 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.933 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.933 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.933 05:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.192 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:12:45.192 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:12:45.759 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.759 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:45.759 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.759 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.759 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.759 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.759 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:45.759 05:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:46.018 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:46.018 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.018 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:46.018 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:46.018 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:46.018 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.018 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:12:46.019 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.019 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.019 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.019 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:46.019 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.019 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.588 00:12:46.588 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.588 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.588 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.588 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.588 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.588 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.589 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.847 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.847 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.847 { 00:12:46.847 "cntlid": 7, 00:12:46.847 "qid": 0, 00:12:46.847 "state": "enabled", 00:12:46.847 "thread": "nvmf_tgt_poll_group_000", 00:12:46.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:46.847 "listen_address": { 00:12:46.847 "trtype": "TCP", 00:12:46.847 "adrfam": "IPv4", 00:12:46.847 "traddr": "10.0.0.3", 00:12:46.847 "trsvcid": "4420" 00:12:46.847 }, 00:12:46.847 "peer_address": { 00:12:46.847 "trtype": "TCP", 00:12:46.847 "adrfam": "IPv4", 00:12:46.847 "traddr": "10.0.0.1", 00:12:46.847 "trsvcid": "56480" 00:12:46.847 }, 00:12:46.847 "auth": { 00:12:46.847 "state": "completed", 00:12:46.847 "digest": "sha256", 00:12:46.847 "dhgroup": "null" 00:12:46.847 } 00:12:46.847 } 00:12:46.848 ]' 00:12:46.848 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.848 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.848 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.848 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:46.848 05:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.848 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.848 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.848 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.106 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:12:47.106 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:12:48.043 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.043 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:48.043 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.043 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.043 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.043 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.043 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.043 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:48.043 05:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.043 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.302 00:12:48.584 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.584 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.584 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.846 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.846 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.846 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.846 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.846 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.846 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.846 { 00:12:48.846 "cntlid": 9, 00:12:48.846 "qid": 0, 00:12:48.846 "state": "enabled", 00:12:48.846 "thread": "nvmf_tgt_poll_group_000", 00:12:48.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:48.846 "listen_address": { 00:12:48.846 "trtype": "TCP", 00:12:48.846 "adrfam": "IPv4", 00:12:48.846 "traddr": "10.0.0.3", 00:12:48.846 "trsvcid": "4420" 00:12:48.846 }, 00:12:48.846 "peer_address": { 00:12:48.846 "trtype": "TCP", 00:12:48.846 "adrfam": "IPv4", 00:12:48.846 "traddr": "10.0.0.1", 00:12:48.846 "trsvcid": "56506" 00:12:48.846 }, 00:12:48.846 "auth": { 00:12:48.846 "state": "completed", 00:12:48.846 "digest": "sha256", 00:12:48.846 "dhgroup": "ffdhe2048" 00:12:48.846 } 00:12:48.846 } 00:12:48.846 ]' 00:12:48.846 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.846 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.846 05:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.846 05:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:48.846 05:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.846 05:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.846 05:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.846 05:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.414 
05:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:12:49.414 05:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:12:49.982 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.982 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:49.982 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.982 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.982 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.982 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.982 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:49.982 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.241 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.809 00:12:50.809 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.809 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.809 05:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.068 { 00:12:51.068 "cntlid": 11, 00:12:51.068 "qid": 0, 00:12:51.068 "state": "enabled", 00:12:51.068 "thread": "nvmf_tgt_poll_group_000", 00:12:51.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:51.068 "listen_address": { 00:12:51.068 "trtype": "TCP", 00:12:51.068 "adrfam": "IPv4", 00:12:51.068 "traddr": "10.0.0.3", 00:12:51.068 "trsvcid": "4420" 00:12:51.068 }, 00:12:51.068 "peer_address": { 00:12:51.068 "trtype": "TCP", 00:12:51.068 "adrfam": "IPv4", 00:12:51.068 "traddr": "10.0.0.1", 00:12:51.068 "trsvcid": "56532" 00:12:51.068 }, 00:12:51.068 "auth": { 00:12:51.068 "state": "completed", 00:12:51.068 "digest": "sha256", 00:12:51.068 "dhgroup": "ffdhe2048" 00:12:51.068 } 00:12:51.068 } 00:12:51.068 ]' 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.068 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.068 
05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.636 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:12:51.636 05:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:12:52.205 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.205 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:52.205 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.205 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.205 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.205 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.205 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:52.205 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.464 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.723 00:12:52.723 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.723 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.723 05:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.981 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.981 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.981 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.981 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.981 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.981 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.981 { 00:12:52.981 "cntlid": 13, 00:12:52.981 "qid": 0, 00:12:52.981 "state": "enabled", 00:12:52.981 "thread": "nvmf_tgt_poll_group_000", 00:12:52.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:52.981 "listen_address": { 00:12:52.981 "trtype": "TCP", 00:12:52.981 "adrfam": "IPv4", 00:12:52.981 "traddr": "10.0.0.3", 00:12:52.981 "trsvcid": "4420" 00:12:52.981 }, 00:12:52.981 "peer_address": { 00:12:52.981 "trtype": "TCP", 00:12:52.981 "adrfam": "IPv4", 00:12:52.981 "traddr": "10.0.0.1", 00:12:52.981 "trsvcid": "56548" 00:12:52.981 }, 00:12:52.981 "auth": { 00:12:52.981 "state": "completed", 00:12:52.981 "digest": "sha256", 00:12:52.981 "dhgroup": "ffdhe2048" 00:12:52.981 } 00:12:52.981 } 00:12:52.981 ]' 00:12:52.981 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.240 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.240 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.240 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:53.240 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.240 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.240 05:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.240 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.498 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:12:53.498 05:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:12:54.065 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.065 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:54.065 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.065 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.065 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.065 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.065 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:54.065 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
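Each digest/dhgroup combination in this trace repeats the same connect_authenticate sequence over keys 0 through 3: set the host's allowed DH-HMAC-CHAP digests and groups, authorize the host NQN on the subsystem with the matching key (plus a controller key when one exists), attach a controller through the host application, check that the qpair reports auth state "completed", then repeat the handshake with nvme connect using the DHHC-1 secrets before removing the host again. Condensed from the sha256/ffdhe2048/key3 iteration running here (a sketch assembled from the surrounding log entries, not the auth.sh source; key3 has no controller key in this run):
    # Host-side DH-HMAC-CHAP policy for this iteration:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Target side: authorize the host NQN with key3:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3
    # Host side: attach a controller, which performs the in-band authentication:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    # Verify the qpair authenticated (the trace checks for "completed"):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'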
00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.323 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.324 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.324 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.891 00:12:54.891 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.891 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.891 05:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.149 { 00:12:55.149 "cntlid": 15, 00:12:55.149 "qid": 0, 00:12:55.149 "state": "enabled", 00:12:55.149 "thread": "nvmf_tgt_poll_group_000", 00:12:55.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:55.149 "listen_address": { 00:12:55.149 "trtype": "TCP", 00:12:55.149 "adrfam": "IPv4", 00:12:55.149 "traddr": "10.0.0.3", 00:12:55.149 "trsvcid": "4420" 00:12:55.149 }, 00:12:55.149 "peer_address": { 00:12:55.149 "trtype": "TCP", 00:12:55.149 "adrfam": "IPv4", 00:12:55.149 "traddr": "10.0.0.1", 00:12:55.149 "trsvcid": "56758" 00:12:55.149 }, 00:12:55.149 "auth": { 00:12:55.149 "state": "completed", 00:12:55.149 "digest": "sha256", 00:12:55.149 "dhgroup": "ffdhe2048" 00:12:55.149 } 00:12:55.149 } 00:12:55.149 ]' 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.149 
05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.149 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.407 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:12:55.407 05:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:12:55.975 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.975 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:55.975 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.975 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.975 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.975 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:55.975 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.975 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:55.975 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.234 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.802 00:12:56.802 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.802 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.802 05:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.060 { 00:12:57.060 "cntlid": 17, 00:12:57.060 "qid": 0, 00:12:57.060 "state": "enabled", 00:12:57.060 "thread": "nvmf_tgt_poll_group_000", 00:12:57.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:57.060 "listen_address": { 00:12:57.060 "trtype": "TCP", 00:12:57.060 "adrfam": "IPv4", 00:12:57.060 "traddr": "10.0.0.3", 00:12:57.060 "trsvcid": "4420" 00:12:57.060 }, 00:12:57.060 "peer_address": { 00:12:57.060 "trtype": "TCP", 00:12:57.060 "adrfam": "IPv4", 00:12:57.060 "traddr": "10.0.0.1", 00:12:57.060 "trsvcid": "56792" 00:12:57.060 }, 00:12:57.060 "auth": { 00:12:57.060 "state": "completed", 00:12:57.060 "digest": "sha256", 00:12:57.060 "dhgroup": "ffdhe3072" 00:12:57.060 } 00:12:57.060 } 00:12:57.060 ]' 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.060 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.061 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.061 05:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.061 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.061 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.627 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:12:57.627 05:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:12:58.194 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.194 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:12:58.194 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.194 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.194 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.194 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.194 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:58.194 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.453 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.713 00:12:58.971 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.971 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.971 05:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.231 { 00:12:59.231 "cntlid": 19, 00:12:59.231 "qid": 0, 00:12:59.231 "state": "enabled", 00:12:59.231 "thread": "nvmf_tgt_poll_group_000", 00:12:59.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:12:59.231 "listen_address": { 00:12:59.231 "trtype": "TCP", 00:12:59.231 "adrfam": "IPv4", 00:12:59.231 "traddr": "10.0.0.3", 00:12:59.231 "trsvcid": "4420" 00:12:59.231 }, 00:12:59.231 "peer_address": { 00:12:59.231 "trtype": "TCP", 00:12:59.231 "adrfam": "IPv4", 00:12:59.231 "traddr": "10.0.0.1", 00:12:59.231 "trsvcid": "56818" 00:12:59.231 }, 00:12:59.231 "auth": { 00:12:59.231 "state": "completed", 00:12:59.231 "digest": "sha256", 00:12:59.231 "dhgroup": "ffdhe3072" 00:12:59.231 } 00:12:59.231 } 00:12:59.231 ]' 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.231 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.490 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:12:59.490 05:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:00.427 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.427 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:00.427 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.427 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.427 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.427 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.427 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:00.427 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.686 05:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.003 00:13:01.003 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.003 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.003 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.262 { 00:13:01.262 "cntlid": 21, 00:13:01.262 "qid": 0, 00:13:01.262 "state": "enabled", 00:13:01.262 "thread": "nvmf_tgt_poll_group_000", 00:13:01.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:01.262 "listen_address": { 00:13:01.262 "trtype": "TCP", 00:13:01.262 "adrfam": "IPv4", 00:13:01.262 "traddr": "10.0.0.3", 00:13:01.262 "trsvcid": "4420" 00:13:01.262 }, 00:13:01.262 "peer_address": { 00:13:01.262 "trtype": "TCP", 00:13:01.262 "adrfam": "IPv4", 00:13:01.262 "traddr": "10.0.0.1", 00:13:01.262 "trsvcid": "56856" 00:13:01.262 }, 00:13:01.262 "auth": { 00:13:01.262 "state": "completed", 00:13:01.262 "digest": "sha256", 00:13:01.262 "dhgroup": "ffdhe3072" 00:13:01.262 } 00:13:01.262 } 00:13:01.262 ]' 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.262 05:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:01.262 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.522 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.522 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.522 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.781 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:01.781 05:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:02.350 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.350 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:02.350 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.350 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.350 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.350 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.350 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:02.350 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.918 05:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:03.177 00:13:03.177 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.177 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.177 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.436 { 00:13:03.436 "cntlid": 23, 00:13:03.436 "qid": 0, 00:13:03.436 "state": "enabled", 00:13:03.436 "thread": "nvmf_tgt_poll_group_000", 00:13:03.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:03.436 "listen_address": { 00:13:03.436 "trtype": "TCP", 00:13:03.436 "adrfam": "IPv4", 00:13:03.436 "traddr": "10.0.0.3", 00:13:03.436 "trsvcid": "4420" 00:13:03.436 }, 00:13:03.436 "peer_address": { 00:13:03.436 "trtype": "TCP", 00:13:03.436 "adrfam": "IPv4", 00:13:03.436 "traddr": "10.0.0.1", 00:13:03.436 "trsvcid": "56880" 00:13:03.436 }, 00:13:03.436 "auth": { 00:13:03.436 "state": "completed", 00:13:03.436 "digest": "sha256", 00:13:03.436 "dhgroup": "ffdhe3072" 00:13:03.436 } 00:13:03.436 } 00:13:03.436 ]' 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.436 05:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.004 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:04.004 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:04.572 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.573 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:04.573 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.573 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.573 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.573 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:04.573 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.573 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:04.573 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.832 05:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.089 00:13:05.348 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.348 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.348 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.606 { 00:13:05.606 "cntlid": 25, 00:13:05.606 "qid": 0, 00:13:05.606 "state": "enabled", 00:13:05.606 "thread": "nvmf_tgt_poll_group_000", 00:13:05.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:05.606 "listen_address": { 00:13:05.606 "trtype": "TCP", 00:13:05.606 "adrfam": "IPv4", 00:13:05.606 "traddr": "10.0.0.3", 00:13:05.606 "trsvcid": "4420" 00:13:05.606 }, 00:13:05.606 "peer_address": { 00:13:05.606 "trtype": "TCP", 00:13:05.606 "adrfam": "IPv4", 00:13:05.606 "traddr": "10.0.0.1", 00:13:05.606 "trsvcid": "38346" 00:13:05.606 }, 00:13:05.606 "auth": { 00:13:05.606 "state": "completed", 00:13:05.606 "digest": "sha256", 00:13:05.606 "dhgroup": "ffdhe4096" 00:13:05.606 } 00:13:05.606 } 00:13:05.606 ]' 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.606 05:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.865 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:05.865 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.802 05:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.802 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.802 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.802 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.802 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.370 00:13:07.370 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.370 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.370 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.629 { 00:13:07.629 "cntlid": 27, 00:13:07.629 "qid": 0, 00:13:07.629 "state": "enabled", 00:13:07.629 "thread": "nvmf_tgt_poll_group_000", 00:13:07.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:07.629 "listen_address": { 00:13:07.629 "trtype": "TCP", 00:13:07.629 "adrfam": "IPv4", 00:13:07.629 "traddr": "10.0.0.3", 00:13:07.629 "trsvcid": "4420" 00:13:07.629 }, 00:13:07.629 "peer_address": { 00:13:07.629 "trtype": "TCP", 00:13:07.629 "adrfam": "IPv4", 00:13:07.629 "traddr": "10.0.0.1", 00:13:07.629 "trsvcid": "38378" 00:13:07.629 }, 00:13:07.629 "auth": { 00:13:07.629 "state": "completed", 
00:13:07.629 "digest": "sha256", 00:13:07.629 "dhgroup": "ffdhe4096" 00:13:07.629 } 00:13:07.629 } 00:13:07.629 ]' 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.629 05:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.888 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:07.888 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:08.824 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.824 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:08.825 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.825 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.825 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.825 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.825 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:08.825 05:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.083 05:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.083 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.342 00:13:09.342 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.342 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.342 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.601 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.601 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.601 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.601 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.601 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.601 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.601 { 00:13:09.601 "cntlid": 29, 00:13:09.601 "qid": 0, 00:13:09.601 "state": "enabled", 00:13:09.601 "thread": "nvmf_tgt_poll_group_000", 00:13:09.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:09.601 "listen_address": { 00:13:09.601 "trtype": "TCP", 00:13:09.601 "adrfam": "IPv4", 00:13:09.601 "traddr": "10.0.0.3", 00:13:09.601 "trsvcid": "4420" 00:13:09.601 }, 00:13:09.601 "peer_address": { 00:13:09.601 "trtype": "TCP", 00:13:09.601 "adrfam": 
"IPv4", 00:13:09.601 "traddr": "10.0.0.1", 00:13:09.601 "trsvcid": "38398" 00:13:09.601 }, 00:13:09.601 "auth": { 00:13:09.601 "state": "completed", 00:13:09.601 "digest": "sha256", 00:13:09.601 "dhgroup": "ffdhe4096" 00:13:09.601 } 00:13:09.601 } 00:13:09.601 ]' 00:13:09.601 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.601 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.601 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.859 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:09.859 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.859 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.859 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.859 05:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.137 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:10.137 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:10.704 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.704 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:10.704 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.704 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.704 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.704 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.704 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:10.704 05:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:10.962 05:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.962 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.529 00:13:11.529 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.529 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.529 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.787 { 00:13:11.787 "cntlid": 31, 00:13:11.787 "qid": 0, 00:13:11.787 "state": "enabled", 00:13:11.787 "thread": "nvmf_tgt_poll_group_000", 00:13:11.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:11.787 "listen_address": { 00:13:11.787 "trtype": "TCP", 00:13:11.787 "adrfam": "IPv4", 00:13:11.787 "traddr": "10.0.0.3", 00:13:11.787 "trsvcid": "4420" 00:13:11.787 }, 00:13:11.787 "peer_address": { 00:13:11.787 "trtype": "TCP", 
00:13:11.787 "adrfam": "IPv4", 00:13:11.787 "traddr": "10.0.0.1", 00:13:11.787 "trsvcid": "38426" 00:13:11.787 }, 00:13:11.787 "auth": { 00:13:11.787 "state": "completed", 00:13:11.787 "digest": "sha256", 00:13:11.787 "dhgroup": "ffdhe4096" 00:13:11.787 } 00:13:11.787 } 00:13:11.787 ]' 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.787 05:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.046 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:12.046 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:12.981 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.981 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:12.981 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.981 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.981 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.981 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.981 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.981 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:12.981 05:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:13.239 
05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.239 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.240 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.537 00:13:13.537 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.537 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.537 05:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.795 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.795 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.795 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.795 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.795 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.795 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.795 { 00:13:13.795 "cntlid": 33, 00:13:13.795 "qid": 0, 00:13:13.795 "state": "enabled", 00:13:13.795 "thread": "nvmf_tgt_poll_group_000", 00:13:13.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:13.795 "listen_address": { 00:13:13.795 "trtype": "TCP", 00:13:13.795 "adrfam": "IPv4", 00:13:13.795 "traddr": 
"10.0.0.3", 00:13:13.795 "trsvcid": "4420" 00:13:13.795 }, 00:13:13.795 "peer_address": { 00:13:13.795 "trtype": "TCP", 00:13:13.795 "adrfam": "IPv4", 00:13:13.795 "traddr": "10.0.0.1", 00:13:13.795 "trsvcid": "38448" 00:13:13.795 }, 00:13:13.795 "auth": { 00:13:13.795 "state": "completed", 00:13:13.795 "digest": "sha256", 00:13:13.795 "dhgroup": "ffdhe6144" 00:13:13.795 } 00:13:13.795 } 00:13:13.795 ]' 00:13:13.795 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.053 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:14.053 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.053 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:14.053 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.053 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.054 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.054 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.312 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:14.312 05:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:15.247 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.247 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:15.247 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.247 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.247 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.247 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.247 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:15.247 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:15.504 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:15.504 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.504 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:15.504 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:15.504 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:15.505 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.505 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.505 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.505 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.505 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.505 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.505 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.505 05:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.767 00:13:16.026 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.026 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.026 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.284 { 00:13:16.284 "cntlid": 35, 00:13:16.284 "qid": 0, 00:13:16.284 "state": "enabled", 00:13:16.284 "thread": "nvmf_tgt_poll_group_000", 
00:13:16.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:16.284 "listen_address": { 00:13:16.284 "trtype": "TCP", 00:13:16.284 "adrfam": "IPv4", 00:13:16.284 "traddr": "10.0.0.3", 00:13:16.284 "trsvcid": "4420" 00:13:16.284 }, 00:13:16.284 "peer_address": { 00:13:16.284 "trtype": "TCP", 00:13:16.284 "adrfam": "IPv4", 00:13:16.284 "traddr": "10.0.0.1", 00:13:16.284 "trsvcid": "60888" 00:13:16.284 }, 00:13:16.284 "auth": { 00:13:16.284 "state": "completed", 00:13:16.284 "digest": "sha256", 00:13:16.284 "dhgroup": "ffdhe6144" 00:13:16.284 } 00:13:16.284 } 00:13:16.284 ]' 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.284 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.851 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:16.851 05:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:17.418 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.418 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:17.418 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.418 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.418 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.418 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.418 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:17.418 05:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.677 05:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.244 00:13:18.244 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.244 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.244 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.503 { 
00:13:18.503 "cntlid": 37, 00:13:18.503 "qid": 0, 00:13:18.503 "state": "enabled", 00:13:18.503 "thread": "nvmf_tgt_poll_group_000", 00:13:18.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:18.503 "listen_address": { 00:13:18.503 "trtype": "TCP", 00:13:18.503 "adrfam": "IPv4", 00:13:18.503 "traddr": "10.0.0.3", 00:13:18.503 "trsvcid": "4420" 00:13:18.503 }, 00:13:18.503 "peer_address": { 00:13:18.503 "trtype": "TCP", 00:13:18.503 "adrfam": "IPv4", 00:13:18.503 "traddr": "10.0.0.1", 00:13:18.503 "trsvcid": "60896" 00:13:18.503 }, 00:13:18.503 "auth": { 00:13:18.503 "state": "completed", 00:13:18.503 "digest": "sha256", 00:13:18.503 "dhgroup": "ffdhe6144" 00:13:18.503 } 00:13:18.503 } 00:13:18.503 ]' 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:18.503 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.761 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.761 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.761 05:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.020 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:19.020 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:19.587 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.587 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:19.587 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.587 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.587 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.587 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.587 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:19.587 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:19.846 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:19.846 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.846 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:19.846 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:19.846 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:19.846 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.846 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:13:19.846 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.846 05:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.846 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.846 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:19.846 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.846 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:20.413 00:13:20.413 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.413 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.413 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:20.672 { 00:13:20.672 "cntlid": 39, 00:13:20.672 "qid": 0, 00:13:20.672 "state": "enabled", 00:13:20.672 "thread": "nvmf_tgt_poll_group_000", 00:13:20.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:20.672 "listen_address": { 00:13:20.672 "trtype": "TCP", 00:13:20.672 "adrfam": "IPv4", 00:13:20.672 "traddr": "10.0.0.3", 00:13:20.672 "trsvcid": "4420" 00:13:20.672 }, 00:13:20.672 "peer_address": { 00:13:20.672 "trtype": "TCP", 00:13:20.672 "adrfam": "IPv4", 00:13:20.672 "traddr": "10.0.0.1", 00:13:20.672 "trsvcid": "60916" 00:13:20.672 }, 00:13:20.672 "auth": { 00:13:20.672 "state": "completed", 00:13:20.672 "digest": "sha256", 00:13:20.672 "dhgroup": "ffdhe6144" 00:13:20.672 } 00:13:20.672 } 00:13:20.672 ]' 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.672 05:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.239 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:21.239 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:21.806 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.806 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:21.806 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.806 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.806 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.806 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:21.806 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.806 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:21.806 05:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:21.806 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:21.806 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.806 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:21.806 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:21.806 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:21.806 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.806 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.806 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.806 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.065 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.065 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.065 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.065 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.632 00:13:22.632 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.632 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.632 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.632 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.891 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.891 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.891 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.891 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:22.891 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.891 { 00:13:22.891 "cntlid": 41, 00:13:22.891 "qid": 0, 00:13:22.891 "state": "enabled", 00:13:22.891 "thread": "nvmf_tgt_poll_group_000", 00:13:22.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:22.891 "listen_address": { 00:13:22.891 "trtype": "TCP", 00:13:22.891 "adrfam": "IPv4", 00:13:22.891 "traddr": "10.0.0.3", 00:13:22.891 "trsvcid": "4420" 00:13:22.891 }, 00:13:22.891 "peer_address": { 00:13:22.891 "trtype": "TCP", 00:13:22.891 "adrfam": "IPv4", 00:13:22.891 "traddr": "10.0.0.1", 00:13:22.891 "trsvcid": "60946" 00:13:22.891 }, 00:13:22.891 "auth": { 00:13:22.891 "state": "completed", 00:13:22.891 "digest": "sha256", 00:13:22.891 "dhgroup": "ffdhe8192" 00:13:22.891 } 00:13:22.891 } 00:13:22.891 ]' 00:13:22.891 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.891 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:22.891 05:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.891 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:22.891 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.891 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.891 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.891 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.150 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:23.150 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:23.717 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.717 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:23.717 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.717 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.717 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
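The nvme_connect/disconnect entries in this pass exercise the kernel initiator (nvme-cli) against the same subsystem, passing the DH-HMAC-CHAP secrets directly on the command line. A condensed sketch of that step using the address, NQNs and flags from the log; the DHHC-1 strings below are shortened placeholders, not the real secrets:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec
  hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec
  hostkey='DHHC-1:00:<host secret>:'      # placeholder; the real value appears in the log entries above
  ctrlkey='DHHC-1:03:<ctrl secret>:'      # placeholder; supplying it makes the authentication bidirectional

  # Connect with DH-HMAC-CHAP, then tear the session down again as the test does.
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$hostkey" --dhchap-ctrl-secret "$ctrlkey"
  nvme disconnect -n "$subnqn"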
00:13:23.717 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.717 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:23.717 05:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.285 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.852 00:13:24.852 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.852 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.852 05:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.110 05:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.110 { 00:13:25.110 "cntlid": 43, 00:13:25.110 "qid": 0, 00:13:25.110 "state": "enabled", 00:13:25.110 "thread": "nvmf_tgt_poll_group_000", 00:13:25.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:25.110 "listen_address": { 00:13:25.110 "trtype": "TCP", 00:13:25.110 "adrfam": "IPv4", 00:13:25.110 "traddr": "10.0.0.3", 00:13:25.110 "trsvcid": "4420" 00:13:25.110 }, 00:13:25.110 "peer_address": { 00:13:25.110 "trtype": "TCP", 00:13:25.110 "adrfam": "IPv4", 00:13:25.110 "traddr": "10.0.0.1", 00:13:25.110 "trsvcid": "60962" 00:13:25.110 }, 00:13:25.110 "auth": { 00:13:25.110 "state": "completed", 00:13:25.110 "digest": "sha256", 00:13:25.110 "dhgroup": "ffdhe8192" 00:13:25.110 } 00:13:25.110 } 00:13:25.110 ]' 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.110 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.368 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:25.368 05:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:26.325 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.325 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:26.325 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.325 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
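Each connect_authenticate pass traced above follows the same shape: register the host on the target subsystem with its DH-HMAC-CHAP key pair, attach a controller from the host-side SPDK app (which listens on its own RPC socket, /var/tmp/host.sock), confirm the controller came up, and detach. A minimal sketch of one such pass; the rpc.py path, addresses and NQNs are taken from the log, while key1/ckey1 are the logical key names the test registers earlier in auth.sh and are not shown in this excerpt:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec

  # Target side (default RPC socket): allow the host and bind host + controller keys.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller over TCP, authenticating with the same key pair.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # The attach should yield a controller named nvme0; detach once the checks are done.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0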
00:13:26.325 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.325 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.325 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:26.325 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.584 05:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.151 00:13:27.151 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.151 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.151 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.409 05:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.409 { 00:13:27.409 "cntlid": 45, 00:13:27.409 "qid": 0, 00:13:27.409 "state": "enabled", 00:13:27.409 "thread": "nvmf_tgt_poll_group_000", 00:13:27.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:27.409 "listen_address": { 00:13:27.409 "trtype": "TCP", 00:13:27.409 "adrfam": "IPv4", 00:13:27.409 "traddr": "10.0.0.3", 00:13:27.409 "trsvcid": "4420" 00:13:27.409 }, 00:13:27.409 "peer_address": { 00:13:27.409 "trtype": "TCP", 00:13:27.409 "adrfam": "IPv4", 00:13:27.409 "traddr": "10.0.0.1", 00:13:27.409 "trsvcid": "52624" 00:13:27.409 }, 00:13:27.409 "auth": { 00:13:27.409 "state": "completed", 00:13:27.409 "digest": "sha256", 00:13:27.409 "dhgroup": "ffdhe8192" 00:13:27.409 } 00:13:27.409 } 00:13:27.409 ]' 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:27.409 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.668 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.668 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.668 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.927 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:27.927 05:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:28.494 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.494 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:28.494 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
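The jq expressions above are the actual assertion: the test does not just check that the attach succeeded, it reads the qpair back from the target and verifies that authentication completed with exactly the digest and dhgroup configured for this pass. A sketch of that check against the nvmf_subsystem_get_qpairs output shown in the log, reusing $rpc and $subnqn from the earlier sketch:

  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")

  # Expect the digest/dhgroup configured for this pass and a completed handshake.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]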
00:13:28.494 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.495 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.495 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.495 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:28.495 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:28.754 05:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.322 00:13:29.580 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.580 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.580 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.839 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.840 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.840 
05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.840 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.840 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.840 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.840 { 00:13:29.840 "cntlid": 47, 00:13:29.840 "qid": 0, 00:13:29.840 "state": "enabled", 00:13:29.840 "thread": "nvmf_tgt_poll_group_000", 00:13:29.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:29.840 "listen_address": { 00:13:29.840 "trtype": "TCP", 00:13:29.840 "adrfam": "IPv4", 00:13:29.840 "traddr": "10.0.0.3", 00:13:29.840 "trsvcid": "4420" 00:13:29.840 }, 00:13:29.840 "peer_address": { 00:13:29.840 "trtype": "TCP", 00:13:29.840 "adrfam": "IPv4", 00:13:29.840 "traddr": "10.0.0.1", 00:13:29.840 "trsvcid": "52646" 00:13:29.840 }, 00:13:29.840 "auth": { 00:13:29.840 "state": "completed", 00:13:29.840 "digest": "sha256", 00:13:29.840 "dhgroup": "ffdhe8192" 00:13:29.840 } 00:13:29.840 } 00:13:29.840 ]' 00:13:29.840 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.840 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.840 05:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.840 05:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.840 05:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.840 05:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.840 05:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.840 05:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.408 05:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:30.408 05:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
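The key3 passes above differ from key0-key2: the target registration carries only --dhchap-key key3 (the ${ckeys[$3]:+...} expansion is empty for that slot), and the matching nvme connect passes only --dhchap-secret, so the controller never authenticates back to the host. Side by side, reusing the variables and hypothetical key labels from the sketches above:

  # Bidirectional: the host authenticates with key2 and the controller answers with ckey2.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Unidirectional: no controller key is registered, so only the host is challenged.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3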
00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:30.975 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.234 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.494 00:13:31.494 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.494 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.494 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.752 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.752 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.752 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.752 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.752 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.752 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.752 { 00:13:31.752 "cntlid": 49, 00:13:31.752 "qid": 0, 00:13:31.752 "state": "enabled", 00:13:31.752 "thread": "nvmf_tgt_poll_group_000", 00:13:31.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:31.752 "listen_address": { 00:13:31.752 "trtype": "TCP", 00:13:31.752 "adrfam": "IPv4", 00:13:31.753 "traddr": "10.0.0.3", 00:13:31.753 "trsvcid": "4420" 00:13:31.753 }, 00:13:31.753 "peer_address": { 00:13:31.753 "trtype": "TCP", 00:13:31.753 "adrfam": "IPv4", 00:13:31.753 "traddr": "10.0.0.1", 00:13:31.753 "trsvcid": "52678" 00:13:31.753 }, 00:13:31.753 "auth": { 00:13:31.753 "state": "completed", 00:13:31.753 "digest": "sha384", 00:13:31.753 "dhgroup": "null" 00:13:31.753 } 00:13:31.753 } 00:13:31.753 ]' 00:13:31.753 05:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.012 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.012 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.012 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:32.012 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.012 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.012 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.012 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.270 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:32.270 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:32.837 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.837 05:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:32.837 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.837 05:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.837 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.837 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.837 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:32.837 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.096 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.355 00:13:33.614 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.614 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
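Each connect_authenticate iteration in this section repeats the same RPC sequence spelled out in the trace: configure the host's allowed digest/dhgroup, register the host NQN on the subsystem with its DH-HMAC-CHAP key(s), then attach a bdev controller using those keys. It is condensed below for reference, with the key index fixed at key1 as in the iteration above; the commands are taken from the trace, only the layout is new.

  # One connect_authenticate iteration, digest=sha384 dhgroup=null, key1 (condensed from the trace above)
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1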
00:13:33.614 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.873 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.873 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.873 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.873 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.873 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.873 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.873 { 00:13:33.873 "cntlid": 51, 00:13:33.873 "qid": 0, 00:13:33.873 "state": "enabled", 00:13:33.873 "thread": "nvmf_tgt_poll_group_000", 00:13:33.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:33.873 "listen_address": { 00:13:33.873 "trtype": "TCP", 00:13:33.873 "adrfam": "IPv4", 00:13:33.873 "traddr": "10.0.0.3", 00:13:33.873 "trsvcid": "4420" 00:13:33.873 }, 00:13:33.873 "peer_address": { 00:13:33.873 "trtype": "TCP", 00:13:33.873 "adrfam": "IPv4", 00:13:33.873 "traddr": "10.0.0.1", 00:13:33.873 "trsvcid": "52686" 00:13:33.873 }, 00:13:33.873 "auth": { 00:13:33.873 "state": "completed", 00:13:33.873 "digest": "sha384", 00:13:33.873 "dhgroup": "null" 00:13:33.873 } 00:13:33.873 } 00:13:33.873 ]' 00:13:33.873 05:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.873 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:33.873 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.873 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:33.873 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.873 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.873 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.873 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.132 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:34.132 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:34.701 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.701 
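Besides the SPDK bdev path, every iteration also exercises the kernel initiator via nvme-cli, as in the nvme connect/disconnect pair just issued (its confirmation line follows below in the trace). A sketch of that leg with the DHHC-1 secrets elided; the full secret values appear verbatim in the trace above.

  # Kernel-initiator leg of an iteration (secrets elided; see the trace for the full DHHC-1 values)
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec \
      --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec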
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.701 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:34.701 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.701 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.701 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.701 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.701 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:34.701 05:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.269 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.528 00:13:35.528 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.528 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:13:35.528 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.787 { 00:13:35.787 "cntlid": 53, 00:13:35.787 "qid": 0, 00:13:35.787 "state": "enabled", 00:13:35.787 "thread": "nvmf_tgt_poll_group_000", 00:13:35.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:35.787 "listen_address": { 00:13:35.787 "trtype": "TCP", 00:13:35.787 "adrfam": "IPv4", 00:13:35.787 "traddr": "10.0.0.3", 00:13:35.787 "trsvcid": "4420" 00:13:35.787 }, 00:13:35.787 "peer_address": { 00:13:35.787 "trtype": "TCP", 00:13:35.787 "adrfam": "IPv4", 00:13:35.787 "traddr": "10.0.0.1", 00:13:35.787 "trsvcid": "58678" 00:13:35.787 }, 00:13:35.787 "auth": { 00:13:35.787 "state": "completed", 00:13:35.787 "digest": "sha384", 00:13:35.787 "dhgroup": "null" 00:13:35.787 } 00:13:35.787 } 00:13:35.787 ]' 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.787 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.788 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.788 05:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.047 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:36.047 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:36.984 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.984 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:36.984 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.984 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.984 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.984 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.984 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:36.984 05:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.244 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.503 00:13:37.503 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.503 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
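The key3 iteration just above differs from the earlier ones in one detail: no controller key is defined for key3, so both nvmf_subsystem_add_host and the controller attach run with --dhchap-key key3 only. This follows from the conditional ckey expansion visible at target/auth.sh@68; a minimal sketch of that expansion, assuming a ckeys array indexed by key number as in the test script (the $keyid variable here is illustrative, not a name from the trace).

  # Hedged sketch of the ckey handling at target/auth.sh@68 (ckeys[3] is empty, so the option disappears)
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec \
      --dhchap-key "key$keyid" "${ckey[@]}"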
00:13:37.503 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.762 { 00:13:37.762 "cntlid": 55, 00:13:37.762 "qid": 0, 00:13:37.762 "state": "enabled", 00:13:37.762 "thread": "nvmf_tgt_poll_group_000", 00:13:37.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:37.762 "listen_address": { 00:13:37.762 "trtype": "TCP", 00:13:37.762 "adrfam": "IPv4", 00:13:37.762 "traddr": "10.0.0.3", 00:13:37.762 "trsvcid": "4420" 00:13:37.762 }, 00:13:37.762 "peer_address": { 00:13:37.762 "trtype": "TCP", 00:13:37.762 "adrfam": "IPv4", 00:13:37.762 "traddr": "10.0.0.1", 00:13:37.762 "trsvcid": "58698" 00:13:37.762 }, 00:13:37.762 "auth": { 00:13:37.762 "state": "completed", 00:13:37.762 "digest": "sha384", 00:13:37.762 "dhgroup": "null" 00:13:37.762 } 00:13:37.762 } 00:13:37.762 ]' 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:37.762 05:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.021 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.021 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.021 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.281 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:38.281 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:38.854 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
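From here the trace switches to dhgroup ffdhe2048 for the same sha384 digest, which makes the overall loop structure visible: the target/auth.sh@118-@121 markers correspond to nested loops over digests, dhgroups and key indices, re-applying bdev_nvme_set_options before each connect_authenticate. An illustrative reconstruction of that nesting follows; the values in the comments are only those that actually appear in this trace.

  # Loop nesting implied by the target/auth.sh@118-@123 xtrace markers (illustrative reconstruction)
  for digest in "${digests[@]}"; do          # sha256, sha384, ... (as seen in this log)
      for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe8192, ...
          for keyid in "${!keys[@]}"; do     # 0..3
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done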
00:13:38.854 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:38.854 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.854 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.854 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.854 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.854 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.854 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:38.854 05:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.113 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.372 00:13:39.372 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.372 05:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.372 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.632 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.632 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.632 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.632 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.632 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.632 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.632 { 00:13:39.632 "cntlid": 57, 00:13:39.632 "qid": 0, 00:13:39.632 "state": "enabled", 00:13:39.632 "thread": "nvmf_tgt_poll_group_000", 00:13:39.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:39.632 "listen_address": { 00:13:39.632 "trtype": "TCP", 00:13:39.632 "adrfam": "IPv4", 00:13:39.632 "traddr": "10.0.0.3", 00:13:39.632 "trsvcid": "4420" 00:13:39.632 }, 00:13:39.632 "peer_address": { 00:13:39.632 "trtype": "TCP", 00:13:39.632 "adrfam": "IPv4", 00:13:39.632 "traddr": "10.0.0.1", 00:13:39.632 "trsvcid": "58732" 00:13:39.632 }, 00:13:39.632 "auth": { 00:13:39.632 "state": "completed", 00:13:39.632 "digest": "sha384", 00:13:39.632 "dhgroup": "ffdhe2048" 00:13:39.632 } 00:13:39.632 } 00:13:39.632 ]' 00:13:39.632 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.891 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.891 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.891 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:39.891 05:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.892 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.892 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.892 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.151 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:40.151 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: 
--dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:40.719 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.719 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:40.719 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.719 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.719 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.720 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.720 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:40.720 05:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.978 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.547 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.547 { 00:13:41.547 "cntlid": 59, 00:13:41.547 "qid": 0, 00:13:41.547 "state": "enabled", 00:13:41.547 "thread": "nvmf_tgt_poll_group_000", 00:13:41.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:41.547 "listen_address": { 00:13:41.547 "trtype": "TCP", 00:13:41.547 "adrfam": "IPv4", 00:13:41.547 "traddr": "10.0.0.3", 00:13:41.547 "trsvcid": "4420" 00:13:41.547 }, 00:13:41.547 "peer_address": { 00:13:41.547 "trtype": "TCP", 00:13:41.547 "adrfam": "IPv4", 00:13:41.547 "traddr": "10.0.0.1", 00:13:41.547 "trsvcid": "58766" 00:13:41.547 }, 00:13:41.547 "auth": { 00:13:41.547 "state": "completed", 00:13:41.547 "digest": "sha384", 00:13:41.547 "dhgroup": "ffdhe2048" 00:13:41.547 } 00:13:41.547 } 00:13:41.547 ]' 00:13:41.547 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.806 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.806 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.806 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:41.806 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.806 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.806 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.806 05:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.065 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:42.065 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:42.634 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.634 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:42.634 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.634 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.634 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.634 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.634 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:42.634 05:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.893 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.461 00:13:43.461 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.461 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.461 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.461 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.461 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.461 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.461 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.461 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.461 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.461 { 00:13:43.461 "cntlid": 61, 00:13:43.461 "qid": 0, 00:13:43.461 "state": "enabled", 00:13:43.461 "thread": "nvmf_tgt_poll_group_000", 00:13:43.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:43.461 "listen_address": { 00:13:43.461 "trtype": "TCP", 00:13:43.461 "adrfam": "IPv4", 00:13:43.461 "traddr": "10.0.0.3", 00:13:43.461 "trsvcid": "4420" 00:13:43.461 }, 00:13:43.461 "peer_address": { 00:13:43.461 "trtype": "TCP", 00:13:43.461 "adrfam": "IPv4", 00:13:43.461 "traddr": "10.0.0.1", 00:13:43.461 "trsvcid": "58800" 00:13:43.461 }, 00:13:43.461 "auth": { 00:13:43.461 "state": "completed", 00:13:43.461 "digest": "sha384", 00:13:43.461 "dhgroup": "ffdhe2048" 00:13:43.461 } 00:13:43.461 } 00:13:43.461 ]' 00:13:43.721 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.721 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.721 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.721 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:43.721 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.721 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.721 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.721 05:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.980 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:43.980 05:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:44.548 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.548 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:44.548 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.548 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.548 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.548 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.548 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:44.548 05:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.116 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.375 00:13:45.375 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.375 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.375 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.634 { 00:13:45.634 "cntlid": 63, 00:13:45.634 "qid": 0, 00:13:45.634 "state": "enabled", 00:13:45.634 "thread": "nvmf_tgt_poll_group_000", 00:13:45.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:45.634 "listen_address": { 00:13:45.634 "trtype": "TCP", 00:13:45.634 "adrfam": "IPv4", 00:13:45.634 "traddr": "10.0.0.3", 00:13:45.634 "trsvcid": "4420" 00:13:45.634 }, 00:13:45.634 "peer_address": { 00:13:45.634 "trtype": "TCP", 00:13:45.634 "adrfam": "IPv4", 00:13:45.634 "traddr": "10.0.0.1", 00:13:45.634 "trsvcid": "41868" 00:13:45.634 }, 00:13:45.634 "auth": { 00:13:45.634 "state": "completed", 00:13:45.634 "digest": "sha384", 00:13:45.634 "dhgroup": "ffdhe2048" 00:13:45.634 } 00:13:45.634 } 00:13:45.634 ]' 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.634 05:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.893 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:45.893 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:46.830 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.831 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:46.831 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.831 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.831 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.831 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.831 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.831 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:46.831 05:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:47.090 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.349 00:13:47.349 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.349 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.349 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.608 { 00:13:47.608 "cntlid": 65, 00:13:47.608 "qid": 0, 00:13:47.608 "state": "enabled", 00:13:47.608 "thread": "nvmf_tgt_poll_group_000", 00:13:47.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:47.608 "listen_address": { 00:13:47.608 "trtype": "TCP", 00:13:47.608 "adrfam": "IPv4", 00:13:47.608 "traddr": "10.0.0.3", 00:13:47.608 "trsvcid": "4420" 00:13:47.608 }, 00:13:47.608 "peer_address": { 00:13:47.608 "trtype": "TCP", 00:13:47.608 "adrfam": "IPv4", 00:13:47.608 "traddr": "10.0.0.1", 00:13:47.608 "trsvcid": "41892" 00:13:47.608 }, 00:13:47.608 "auth": { 00:13:47.608 "state": "completed", 00:13:47.608 "digest": "sha384", 00:13:47.608 "dhgroup": "ffdhe3072" 00:13:47.608 } 00:13:47.608 } 00:13:47.608 ]' 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:47.608 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.867 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.867 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.867 05:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.126 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:48.126 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:48.691 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.691 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:48.691 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.691 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.691 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.691 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.691 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:48.691 05:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.949 05:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.949 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.208 00:13:49.208 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.208 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.208 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.466 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.466 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.466 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.466 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.466 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.466 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.466 { 00:13:49.466 "cntlid": 67, 00:13:49.466 "qid": 0, 00:13:49.466 "state": "enabled", 00:13:49.466 "thread": "nvmf_tgt_poll_group_000", 00:13:49.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:49.466 "listen_address": { 00:13:49.466 "trtype": "TCP", 00:13:49.466 "adrfam": "IPv4", 00:13:49.466 "traddr": "10.0.0.3", 00:13:49.466 "trsvcid": "4420" 00:13:49.466 }, 00:13:49.466 "peer_address": { 00:13:49.466 "trtype": "TCP", 00:13:49.466 "adrfam": "IPv4", 00:13:49.466 "traddr": "10.0.0.1", 00:13:49.466 "trsvcid": "41914" 00:13:49.466 }, 00:13:49.466 "auth": { 00:13:49.466 "state": "completed", 00:13:49.466 "digest": "sha384", 00:13:49.466 "dhgroup": "ffdhe3072" 00:13:49.466 } 00:13:49.466 } 00:13:49.466 ]' 00:13:49.466 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.725 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.725 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.725 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:49.725 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.725 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.725 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.725 05:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.984 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:49.984 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:50.920 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.920 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:50.920 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.920 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.920 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.920 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.920 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:50.920 05:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.920 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.488 00:13:51.488 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.488 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.488 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.747 { 00:13:51.747 "cntlid": 69, 00:13:51.747 "qid": 0, 00:13:51.747 "state": "enabled", 00:13:51.747 "thread": "nvmf_tgt_poll_group_000", 00:13:51.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:51.747 "listen_address": { 00:13:51.747 "trtype": "TCP", 00:13:51.747 "adrfam": "IPv4", 00:13:51.747 "traddr": "10.0.0.3", 00:13:51.747 "trsvcid": "4420" 00:13:51.747 }, 00:13:51.747 "peer_address": { 00:13:51.747 "trtype": "TCP", 00:13:51.747 "adrfam": "IPv4", 00:13:51.747 "traddr": "10.0.0.1", 00:13:51.747 "trsvcid": "41954" 00:13:51.747 }, 00:13:51.747 "auth": { 00:13:51.747 "state": "completed", 00:13:51.747 "digest": "sha384", 00:13:51.747 "dhgroup": "ffdhe3072" 00:13:51.747 } 00:13:51.747 } 00:13:51.747 ]' 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:51.747 05:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.007 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.007 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:52.007 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.266 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:52.266 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:13:52.834 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.834 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:52.834 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.834 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.834 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.834 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.834 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:52.835 05:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.094 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.353 00:13:53.353 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.353 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.353 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.612 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.612 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.612 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.612 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.612 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.612 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.612 { 00:13:53.612 "cntlid": 71, 00:13:53.612 "qid": 0, 00:13:53.612 "state": "enabled", 00:13:53.612 "thread": "nvmf_tgt_poll_group_000", 00:13:53.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:53.612 "listen_address": { 00:13:53.612 "trtype": "TCP", 00:13:53.612 "adrfam": "IPv4", 00:13:53.612 "traddr": "10.0.0.3", 00:13:53.612 "trsvcid": "4420" 00:13:53.612 }, 00:13:53.612 "peer_address": { 00:13:53.612 "trtype": "TCP", 00:13:53.612 "adrfam": "IPv4", 00:13:53.612 "traddr": "10.0.0.1", 00:13:53.612 "trsvcid": "41994" 00:13:53.612 }, 00:13:53.612 "auth": { 00:13:53.612 "state": "completed", 00:13:53.612 "digest": "sha384", 00:13:53.612 "dhgroup": "ffdhe3072" 00:13:53.612 } 00:13:53.612 } 00:13:53.612 ]' 00:13:53.612 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.871 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.871 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.871 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:53.871 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.871 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.871 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.871 05:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.131 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:54.131 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:13:54.698 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.698 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:54.698 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.698 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.698 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.698 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.698 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.698 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:54.698 05:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.960 05:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.960 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.219 00:13:55.219 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.219 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.219 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.787 { 00:13:55.787 "cntlid": 73, 00:13:55.787 "qid": 0, 00:13:55.787 "state": "enabled", 00:13:55.787 "thread": "nvmf_tgt_poll_group_000", 00:13:55.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:55.787 "listen_address": { 00:13:55.787 "trtype": "TCP", 00:13:55.787 "adrfam": "IPv4", 00:13:55.787 "traddr": "10.0.0.3", 00:13:55.787 "trsvcid": "4420" 00:13:55.787 }, 00:13:55.787 "peer_address": { 00:13:55.787 "trtype": "TCP", 00:13:55.787 "adrfam": "IPv4", 00:13:55.787 "traddr": "10.0.0.1", 00:13:55.787 "trsvcid": "52432" 00:13:55.787 }, 00:13:55.787 "auth": { 00:13:55.787 "state": "completed", 00:13:55.787 "digest": "sha384", 00:13:55.787 "dhgroup": "ffdhe4096" 00:13:55.787 } 00:13:55.787 } 00:13:55.787 ]' 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.787 05:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.047 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:56.047 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:13:56.615 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.615 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:56.615 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.615 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.615 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.615 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.615 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:56.615 05:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:56.874 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:56.874 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.874 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.874 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:56.874 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:56.874 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.874 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.874 05:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.875 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.875 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.875 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.875 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.875 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.443 00:13:57.443 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.443 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.443 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.702 { 00:13:57.702 "cntlid": 75, 00:13:57.702 "qid": 0, 00:13:57.702 "state": "enabled", 00:13:57.702 "thread": "nvmf_tgt_poll_group_000", 00:13:57.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:57.702 "listen_address": { 00:13:57.702 "trtype": "TCP", 00:13:57.702 "adrfam": "IPv4", 00:13:57.702 "traddr": "10.0.0.3", 00:13:57.702 "trsvcid": "4420" 00:13:57.702 }, 00:13:57.702 "peer_address": { 00:13:57.702 "trtype": "TCP", 00:13:57.702 "adrfam": "IPv4", 00:13:57.702 "traddr": "10.0.0.1", 00:13:57.702 "trsvcid": "52456" 00:13:57.702 }, 00:13:57.702 "auth": { 00:13:57.702 "state": "completed", 00:13:57.702 "digest": "sha384", 00:13:57.702 "dhgroup": "ffdhe4096" 00:13:57.702 } 00:13:57.702 } 00:13:57.702 ]' 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.702 05:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.961 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:58.220 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:13:58.789 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.789 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:13:58.789 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.789 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.789 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.789 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.789 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:58.789 05:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.048 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.307 00:13:59.307 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.307 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.307 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.566 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.566 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.566 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.566 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.566 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.566 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.566 { 00:13:59.566 "cntlid": 77, 00:13:59.566 "qid": 0, 00:13:59.566 "state": "enabled", 00:13:59.566 "thread": "nvmf_tgt_poll_group_000", 00:13:59.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:13:59.566 "listen_address": { 00:13:59.566 "trtype": "TCP", 00:13:59.566 "adrfam": "IPv4", 00:13:59.566 "traddr": "10.0.0.3", 00:13:59.566 "trsvcid": "4420" 00:13:59.566 }, 00:13:59.566 "peer_address": { 00:13:59.566 "trtype": "TCP", 00:13:59.566 "adrfam": "IPv4", 00:13:59.566 "traddr": "10.0.0.1", 00:13:59.566 "trsvcid": "52492" 00:13:59.566 }, 00:13:59.566 "auth": { 00:13:59.566 "state": "completed", 00:13:59.566 "digest": "sha384", 00:13:59.566 "dhgroup": "ffdhe4096" 00:13:59.566 } 00:13:59.566 } 00:13:59.566 ]' 00:13:59.566 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.566 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.566 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:59.826 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:59.826 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.826 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.826 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.826 05:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.085 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:00.085 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:00.653 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.653 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:00.653 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.653 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.653 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.653 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.653 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:00.653 05:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.221 05:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.221 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.480 00:14:01.480 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.480 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.480 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.739 { 00:14:01.739 "cntlid": 79, 00:14:01.739 "qid": 0, 00:14:01.739 "state": "enabled", 00:14:01.739 "thread": "nvmf_tgt_poll_group_000", 00:14:01.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:01.739 "listen_address": { 00:14:01.739 "trtype": "TCP", 00:14:01.739 "adrfam": "IPv4", 00:14:01.739 "traddr": "10.0.0.3", 00:14:01.739 "trsvcid": "4420" 00:14:01.739 }, 00:14:01.739 "peer_address": { 00:14:01.739 "trtype": "TCP", 00:14:01.739 "adrfam": "IPv4", 00:14:01.739 "traddr": "10.0.0.1", 00:14:01.739 "trsvcid": "52510" 00:14:01.739 }, 00:14:01.739 "auth": { 00:14:01.739 "state": "completed", 00:14:01.739 "digest": "sha384", 00:14:01.739 "dhgroup": "ffdhe4096" 00:14:01.739 } 00:14:01.739 } 00:14:01.739 ]' 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:01.739 05:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.739 05:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.998 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:01.998 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:02.565 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.565 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:02.565 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.565 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.824 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.824 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.824 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.824 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:02.824 05:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.139 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.413 00:14:03.413 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.413 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.413 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.672 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.672 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.672 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.672 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.672 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.672 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.672 { 00:14:03.672 "cntlid": 81, 00:14:03.672 "qid": 0, 00:14:03.672 "state": "enabled", 00:14:03.672 "thread": "nvmf_tgt_poll_group_000", 00:14:03.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:03.672 "listen_address": { 00:14:03.672 "trtype": "TCP", 00:14:03.672 "adrfam": "IPv4", 00:14:03.672 "traddr": "10.0.0.3", 00:14:03.672 "trsvcid": "4420" 00:14:03.672 }, 00:14:03.672 "peer_address": { 00:14:03.672 "trtype": "TCP", 00:14:03.672 "adrfam": "IPv4", 00:14:03.672 "traddr": "10.0.0.1", 00:14:03.672 "trsvcid": "52532" 00:14:03.672 }, 00:14:03.672 "auth": { 00:14:03.672 "state": "completed", 00:14:03.672 "digest": "sha384", 00:14:03.672 "dhgroup": "ffdhe6144" 00:14:03.672 } 00:14:03.672 } 00:14:03.672 ]' 00:14:03.672 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
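For reference, each authentication cycle traced above reduces to the command sequence below (a condensed manual sketch, assuming the SPDK target and the host application listening on /var/tmp/host.sock are already running and the DH-HMAC-CHAP keys named key0/ckey0 were registered during the earlier setup; the DHHC-1 secret strings are elided and shown as placeholders):

# Target side: allow the host NQN on the subsystem, binding host and controller keys.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side (SPDK initiator): pin the digest and DH group, then attach with the same keys.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Target side: confirm the qpair finished authentication with the expected digest/dhgroup.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth'

# Kernel initiator path: the same handshake via nvme-cli, passing the secrets directly.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec \
    --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 \
    --dhchap-secret 'DHHC-1:00:<host key>:' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The trace repeats this cycle for every key index and for each DH group in turn (ffdhe3072, ffdhe4096, ffdhe6144) while the digest stays sha384, removing the host from the subsystem between iterations.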
00:14:03.930 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:03.930 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.930 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:03.930 05:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.930 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.930 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.930 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.189 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:04.189 05:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:05.123 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.123 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:05.123 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.123 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.123 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.123 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.123 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:05.123 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:05.123 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.124 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.689 00:14:05.689 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.689 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.689 05:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.948 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.948 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.948 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.948 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.948 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.948 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.948 { 00:14:05.948 "cntlid": 83, 00:14:05.948 "qid": 0, 00:14:05.948 "state": "enabled", 00:14:05.948 "thread": "nvmf_tgt_poll_group_000", 00:14:05.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:05.948 "listen_address": { 00:14:05.948 "trtype": "TCP", 00:14:05.948 "adrfam": "IPv4", 00:14:05.948 "traddr": "10.0.0.3", 00:14:05.948 "trsvcid": "4420" 00:14:05.948 }, 00:14:05.948 "peer_address": { 00:14:05.948 "trtype": "TCP", 00:14:05.948 "adrfam": "IPv4", 00:14:05.948 "traddr": "10.0.0.1", 00:14:05.948 "trsvcid": "44880" 00:14:05.948 }, 00:14:05.948 "auth": { 00:14:05.948 "state": "completed", 00:14:05.948 "digest": "sha384", 
00:14:05.948 "dhgroup": "ffdhe6144" 00:14:05.948 } 00:14:05.948 } 00:14:05.948 ]' 00:14:05.948 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.207 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.207 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.207 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:06.207 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.207 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.207 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.207 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.466 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:06.466 05:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.402 05:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.970 00:14:07.970 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.970 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.970 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.228 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.228 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.228 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.228 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.228 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.228 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.228 { 00:14:08.228 "cntlid": 85, 00:14:08.228 "qid": 0, 00:14:08.228 "state": "enabled", 00:14:08.228 "thread": "nvmf_tgt_poll_group_000", 00:14:08.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:08.228 "listen_address": { 00:14:08.228 "trtype": "TCP", 00:14:08.228 "adrfam": "IPv4", 00:14:08.228 "traddr": "10.0.0.3", 00:14:08.228 "trsvcid": "4420" 00:14:08.228 }, 00:14:08.228 "peer_address": { 00:14:08.228 "trtype": "TCP", 00:14:08.228 "adrfam": "IPv4", 00:14:08.228 "traddr": "10.0.0.1", 00:14:08.228 "trsvcid": "44900" 
00:14:08.228 }, 00:14:08.228 "auth": { 00:14:08.228 "state": "completed", 00:14:08.228 "digest": "sha384", 00:14:08.228 "dhgroup": "ffdhe6144" 00:14:08.228 } 00:14:08.228 } 00:14:08.228 ]' 00:14:08.228 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.487 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:08.487 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.487 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:08.487 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.487 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.487 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.487 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.746 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:08.746 05:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:09.313 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.313 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:09.313 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.313 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.313 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.313 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.313 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:09.313 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.880 05:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.138 00:14:10.138 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.138 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.138 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.705 { 00:14:10.705 "cntlid": 87, 00:14:10.705 "qid": 0, 00:14:10.705 "state": "enabled", 00:14:10.705 "thread": "nvmf_tgt_poll_group_000", 00:14:10.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:10.705 "listen_address": { 00:14:10.705 "trtype": "TCP", 00:14:10.705 "adrfam": "IPv4", 00:14:10.705 "traddr": "10.0.0.3", 00:14:10.705 "trsvcid": "4420" 00:14:10.705 }, 00:14:10.705 "peer_address": { 00:14:10.705 "trtype": "TCP", 00:14:10.705 "adrfam": "IPv4", 00:14:10.705 "traddr": "10.0.0.1", 00:14:10.705 "trsvcid": 
"44928" 00:14:10.705 }, 00:14:10.705 "auth": { 00:14:10.705 "state": "completed", 00:14:10.705 "digest": "sha384", 00:14:10.705 "dhgroup": "ffdhe6144" 00:14:10.705 } 00:14:10.705 } 00:14:10.705 ]' 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.705 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:10.706 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.706 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.706 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.706 05:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.964 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:10.964 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:11.531 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.531 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:11.531 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.531 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.531 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.531 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.531 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.531 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:11.531 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.790 05:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.358 00:14:12.358 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.358 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.358 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.617 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.617 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.617 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.617 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.617 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.617 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.617 { 00:14:12.617 "cntlid": 89, 00:14:12.617 "qid": 0, 00:14:12.617 "state": "enabled", 00:14:12.617 "thread": "nvmf_tgt_poll_group_000", 00:14:12.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:12.617 "listen_address": { 00:14:12.617 "trtype": "TCP", 00:14:12.617 "adrfam": "IPv4", 00:14:12.617 "traddr": "10.0.0.3", 00:14:12.617 "trsvcid": "4420" 00:14:12.617 }, 00:14:12.617 "peer_address": { 00:14:12.617 
"trtype": "TCP", 00:14:12.617 "adrfam": "IPv4", 00:14:12.617 "traddr": "10.0.0.1", 00:14:12.617 "trsvcid": "44970" 00:14:12.617 }, 00:14:12.617 "auth": { 00:14:12.617 "state": "completed", 00:14:12.617 "digest": "sha384", 00:14:12.617 "dhgroup": "ffdhe8192" 00:14:12.617 } 00:14:12.617 } 00:14:12.617 ]' 00:14:12.617 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.876 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.876 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.876 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:12.876 05:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.876 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.876 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.876 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.135 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:13.135 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:13.703 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.703 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:13.703 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.703 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.703 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.703 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.703 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:13.703 05:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:13.962 05:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.962 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.531 00:14:14.531 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.531 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.531 05:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.098 { 00:14:15.098 "cntlid": 91, 00:14:15.098 "qid": 0, 00:14:15.098 "state": "enabled", 00:14:15.098 "thread": "nvmf_tgt_poll_group_000", 00:14:15.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 
00:14:15.098 "listen_address": { 00:14:15.098 "trtype": "TCP", 00:14:15.098 "adrfam": "IPv4", 00:14:15.098 "traddr": "10.0.0.3", 00:14:15.098 "trsvcid": "4420" 00:14:15.098 }, 00:14:15.098 "peer_address": { 00:14:15.098 "trtype": "TCP", 00:14:15.098 "adrfam": "IPv4", 00:14:15.098 "traddr": "10.0.0.1", 00:14:15.098 "trsvcid": "45012" 00:14:15.098 }, 00:14:15.098 "auth": { 00:14:15.098 "state": "completed", 00:14:15.098 "digest": "sha384", 00:14:15.098 "dhgroup": "ffdhe8192" 00:14:15.098 } 00:14:15.098 } 00:14:15.098 ]' 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.098 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.356 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:15.356 05:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:15.923 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.182 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:16.182 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.182 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.182 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.182 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.182 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:16.182 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.440 05:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.014 00:14:17.014 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.014 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.014 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.309 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.309 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.309 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.309 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.309 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.309 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.309 { 00:14:17.309 "cntlid": 93, 00:14:17.309 "qid": 0, 00:14:17.309 "state": "enabled", 00:14:17.309 "thread": 
"nvmf_tgt_poll_group_000", 00:14:17.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:17.309 "listen_address": { 00:14:17.309 "trtype": "TCP", 00:14:17.309 "adrfam": "IPv4", 00:14:17.309 "traddr": "10.0.0.3", 00:14:17.309 "trsvcid": "4420" 00:14:17.309 }, 00:14:17.309 "peer_address": { 00:14:17.309 "trtype": "TCP", 00:14:17.309 "adrfam": "IPv4", 00:14:17.309 "traddr": "10.0.0.1", 00:14:17.309 "trsvcid": "54896" 00:14:17.309 }, 00:14:17.309 "auth": { 00:14:17.309 "state": "completed", 00:14:17.309 "digest": "sha384", 00:14:17.309 "dhgroup": "ffdhe8192" 00:14:17.309 } 00:14:17.309 } 00:14:17.309 ]' 00:14:17.309 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.596 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.596 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.596 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:17.596 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.596 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.596 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.596 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.855 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:17.855 05:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:18.422 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.422 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:18.422 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.422 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.422 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.422 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.422 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:18.422 05:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.681 05:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:19.248 00:14:19.507 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.507 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.507 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.765 { 00:14:19.765 "cntlid": 95, 00:14:19.765 "qid": 0, 00:14:19.765 "state": "enabled", 00:14:19.765 
"thread": "nvmf_tgt_poll_group_000", 00:14:19.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:19.765 "listen_address": { 00:14:19.765 "trtype": "TCP", 00:14:19.765 "adrfam": "IPv4", 00:14:19.765 "traddr": "10.0.0.3", 00:14:19.765 "trsvcid": "4420" 00:14:19.765 }, 00:14:19.765 "peer_address": { 00:14:19.765 "trtype": "TCP", 00:14:19.765 "adrfam": "IPv4", 00:14:19.765 "traddr": "10.0.0.1", 00:14:19.765 "trsvcid": "54936" 00:14:19.765 }, 00:14:19.765 "auth": { 00:14:19.765 "state": "completed", 00:14:19.765 "digest": "sha384", 00:14:19.765 "dhgroup": "ffdhe8192" 00:14:19.765 } 00:14:19.765 } 00:14:19.765 ]' 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.765 05:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.024 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:20.024 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:20.960 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.961 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:20.961 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.961 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.961 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.961 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:20.961 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:20.961 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.961 05:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:20.961 05:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.961 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.528 00:14:21.528 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.528 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.528 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.787 { 00:14:21.787 "cntlid": 97, 00:14:21.787 "qid": 0, 00:14:21.787 "state": "enabled", 00:14:21.787 "thread": "nvmf_tgt_poll_group_000", 00:14:21.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:21.787 "listen_address": { 00:14:21.787 "trtype": "TCP", 00:14:21.787 "adrfam": "IPv4", 00:14:21.787 "traddr": "10.0.0.3", 00:14:21.787 "trsvcid": "4420" 00:14:21.787 }, 00:14:21.787 "peer_address": { 00:14:21.787 "trtype": "TCP", 00:14:21.787 "adrfam": "IPv4", 00:14:21.787 "traddr": "10.0.0.1", 00:14:21.787 "trsvcid": "54966" 00:14:21.787 }, 00:14:21.787 "auth": { 00:14:21.787 "state": "completed", 00:14:21.787 "digest": "sha512", 00:14:21.787 "dhgroup": "null" 00:14:21.787 } 00:14:21.787 } 00:14:21.787 ]' 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:21.787 05:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.787 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.787 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.787 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.045 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:22.045 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:22.627 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.627 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:22.627 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.627 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.886 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:22.886 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.886 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:22.886 05:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:23.144 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:23.144 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.144 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:23.144 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:23.144 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:23.145 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.145 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.145 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.145 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.145 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.145 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.145 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.145 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.403 00:14:23.403 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.403 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.403 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.662 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.662 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.662 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.662 05:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.662 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.662 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.662 { 00:14:23.662 "cntlid": 99, 00:14:23.662 "qid": 0, 00:14:23.662 "state": "enabled", 00:14:23.662 "thread": "nvmf_tgt_poll_group_000", 00:14:23.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:23.662 "listen_address": { 00:14:23.662 "trtype": "TCP", 00:14:23.662 "adrfam": "IPv4", 00:14:23.662 "traddr": "10.0.0.3", 00:14:23.662 "trsvcid": "4420" 00:14:23.662 }, 00:14:23.662 "peer_address": { 00:14:23.662 "trtype": "TCP", 00:14:23.662 "adrfam": "IPv4", 00:14:23.663 "traddr": "10.0.0.1", 00:14:23.663 "trsvcid": "54978" 00:14:23.663 }, 00:14:23.663 "auth": { 00:14:23.663 "state": "completed", 00:14:23.663 "digest": "sha512", 00:14:23.663 "dhgroup": "null" 00:14:23.663 } 00:14:23.663 } 00:14:23.663 ]' 00:14:23.663 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.663 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:23.663 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.921 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:23.921 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.921 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.921 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.921 05:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.179 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:24.180 05:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.115 05:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.115 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.373 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.373 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.373 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.373 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.632 00:14:25.632 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.632 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.632 05:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.891 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.892 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.892 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.892 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.892 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.892 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.892 { 00:14:25.892 "cntlid": 101, 00:14:25.892 "qid": 0, 00:14:25.892 "state": "enabled", 00:14:25.892 "thread": "nvmf_tgt_poll_group_000", 00:14:25.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:25.892 "listen_address": { 00:14:25.892 "trtype": "TCP", 00:14:25.892 "adrfam": "IPv4", 00:14:25.892 "traddr": "10.0.0.3", 00:14:25.892 "trsvcid": "4420" 00:14:25.892 }, 00:14:25.892 "peer_address": { 00:14:25.892 "trtype": "TCP", 00:14:25.892 "adrfam": "IPv4", 00:14:25.892 "traddr": "10.0.0.1", 00:14:25.892 "trsvcid": "41204" 00:14:25.892 }, 00:14:25.892 "auth": { 00:14:25.892 "state": "completed", 00:14:25.892 "digest": "sha512", 00:14:25.892 "dhgroup": "null" 00:14:25.892 } 00:14:25.892 } 00:14:25.892 ]' 00:14:25.892 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.892 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:25.892 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.151 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:26.151 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.151 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.151 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.151 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.412 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:26.412 05:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:26.983 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.983 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:26.983 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.983 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:26.983 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.983 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.983 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:26.983 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.550 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.809 00:14:27.809 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.809 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.809 05:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.068 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.068 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.068 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:28.068 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.068 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.068 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.068 { 00:14:28.068 "cntlid": 103, 00:14:28.068 "qid": 0, 00:14:28.068 "state": "enabled", 00:14:28.068 "thread": "nvmf_tgt_poll_group_000", 00:14:28.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:28.068 "listen_address": { 00:14:28.068 "trtype": "TCP", 00:14:28.068 "adrfam": "IPv4", 00:14:28.068 "traddr": "10.0.0.3", 00:14:28.068 "trsvcid": "4420" 00:14:28.068 }, 00:14:28.068 "peer_address": { 00:14:28.068 "trtype": "TCP", 00:14:28.068 "adrfam": "IPv4", 00:14:28.068 "traddr": "10.0.0.1", 00:14:28.068 "trsvcid": "41218" 00:14:28.068 }, 00:14:28.068 "auth": { 00:14:28.068 "state": "completed", 00:14:28.068 "digest": "sha512", 00:14:28.068 "dhgroup": "null" 00:14:28.068 } 00:14:28.068 } 00:14:28.068 ]' 00:14:28.068 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.068 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:28.068 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.327 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:28.327 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.327 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.327 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.327 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.586 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:28.586 05:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:29.211 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.211 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:29.211 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.211 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.211 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:29.211 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.211 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.211 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:29.211 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.470 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.729 00:14:29.729 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.729 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.729 05:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.295 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.295 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.295 
05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.295 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.295 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.295 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.295 { 00:14:30.295 "cntlid": 105, 00:14:30.295 "qid": 0, 00:14:30.295 "state": "enabled", 00:14:30.295 "thread": "nvmf_tgt_poll_group_000", 00:14:30.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:30.295 "listen_address": { 00:14:30.295 "trtype": "TCP", 00:14:30.295 "adrfam": "IPv4", 00:14:30.295 "traddr": "10.0.0.3", 00:14:30.295 "trsvcid": "4420" 00:14:30.295 }, 00:14:30.295 "peer_address": { 00:14:30.295 "trtype": "TCP", 00:14:30.295 "adrfam": "IPv4", 00:14:30.295 "traddr": "10.0.0.1", 00:14:30.295 "trsvcid": "41242" 00:14:30.295 }, 00:14:30.295 "auth": { 00:14:30.295 "state": "completed", 00:14:30.295 "digest": "sha512", 00:14:30.295 "dhgroup": "ffdhe2048" 00:14:30.295 } 00:14:30.295 } 00:14:30.295 ]' 00:14:30.296 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.296 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:30.296 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.296 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:30.296 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.296 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.296 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.296 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.554 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:30.555 05:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:31.122 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.380 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:31.380 05:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.380 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.380 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.380 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.380 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:31.380 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.641 05:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.900 00:14:31.900 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.900 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.900 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.159 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:32.159 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.159 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.159 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.159 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.159 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.159 { 00:14:32.159 "cntlid": 107, 00:14:32.159 "qid": 0, 00:14:32.159 "state": "enabled", 00:14:32.159 "thread": "nvmf_tgt_poll_group_000", 00:14:32.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:32.159 "listen_address": { 00:14:32.159 "trtype": "TCP", 00:14:32.159 "adrfam": "IPv4", 00:14:32.159 "traddr": "10.0.0.3", 00:14:32.159 "trsvcid": "4420" 00:14:32.159 }, 00:14:32.159 "peer_address": { 00:14:32.159 "trtype": "TCP", 00:14:32.159 "adrfam": "IPv4", 00:14:32.159 "traddr": "10.0.0.1", 00:14:32.159 "trsvcid": "41270" 00:14:32.159 }, 00:14:32.159 "auth": { 00:14:32.159 "state": "completed", 00:14:32.159 "digest": "sha512", 00:14:32.159 "dhgroup": "ffdhe2048" 00:14:32.159 } 00:14:32.159 } 00:14:32.159 ]' 00:14:32.159 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.159 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:32.159 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.418 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:32.418 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.418 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.418 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.418 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.677 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:32.677 05:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:33.244 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.244 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:33.244 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.244 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.244 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.244 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.244 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:33.244 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.503 05:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.070 00:14:34.070 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.070 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.070 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.329 { 00:14:34.329 "cntlid": 109, 00:14:34.329 "qid": 0, 00:14:34.329 "state": "enabled", 00:14:34.329 "thread": "nvmf_tgt_poll_group_000", 00:14:34.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:34.329 "listen_address": { 00:14:34.329 "trtype": "TCP", 00:14:34.329 "adrfam": "IPv4", 00:14:34.329 "traddr": "10.0.0.3", 00:14:34.329 "trsvcid": "4420" 00:14:34.329 }, 00:14:34.329 "peer_address": { 00:14:34.329 "trtype": "TCP", 00:14:34.329 "adrfam": "IPv4", 00:14:34.329 "traddr": "10.0.0.1", 00:14:34.329 "trsvcid": "41284" 00:14:34.329 }, 00:14:34.329 "auth": { 00:14:34.329 "state": "completed", 00:14:34.329 "digest": "sha512", 00:14:34.329 "dhgroup": "ffdhe2048" 00:14:34.329 } 00:14:34.329 } 00:14:34.329 ]' 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.329 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.588 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:34.588 05:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:35.156 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
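
For reference, the per-key cycle traced above (and repeated below for the remaining keys and dhgroups) reduces to the sequence sketched here. This is a condensed reconstruction assembled from the rpc.py and nvme-cli invocations visible in this log, not an excerpt of target/auth.sh itself; the HOSTNQN and RPC variables are introduced only for brevity, and the '…' placeholders stand for the full DHHC-1 secrets printed earlier in the log. The pass just completed used key2/ckey2 with sha512 and ffdhe2048.

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Host side (SPDK initiator on /var/tmp/host.sock): restrict the digest and
  # dhgroup under test before attaching.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side: allow the host on the subsystem with the DH-HMAC-CHAP key pair.
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach with the matching keys, then confirm the controller exists.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'       # nvme0

  # Target side: verify the qpair negotiated what was asked for.
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # completed
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # sha512
  $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # ffdhe2048
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Same negotiation through the kernel initiator, passing the secrets directly.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 \
      --dhchap-secret 'DHHC-1:02:…' --dhchap-ctrl-secret 'DHHC-1:01:…'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Tear down so the next key/dhgroup combination starts clean.
  $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
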
00:14:35.414 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:35.414 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.414 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.414 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.414 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.414 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:35.414 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.673 05:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.932 00:14:35.932 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.932 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.932 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.191 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.191 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.191 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.191 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.191 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.191 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.191 { 00:14:36.191 "cntlid": 111, 00:14:36.191 "qid": 0, 00:14:36.191 "state": "enabled", 00:14:36.191 "thread": "nvmf_tgt_poll_group_000", 00:14:36.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:36.191 "listen_address": { 00:14:36.191 "trtype": "TCP", 00:14:36.191 "adrfam": "IPv4", 00:14:36.191 "traddr": "10.0.0.3", 00:14:36.191 "trsvcid": "4420" 00:14:36.191 }, 00:14:36.191 "peer_address": { 00:14:36.191 "trtype": "TCP", 00:14:36.191 "adrfam": "IPv4", 00:14:36.191 "traddr": "10.0.0.1", 00:14:36.191 "trsvcid": "57414" 00:14:36.191 }, 00:14:36.191 "auth": { 00:14:36.191 "state": "completed", 00:14:36.191 "digest": "sha512", 00:14:36.191 "dhgroup": "ffdhe2048" 00:14:36.191 } 00:14:36.191 } 00:14:36.191 ]' 00:14:36.191 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.191 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.191 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.450 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:36.450 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.450 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.450 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.450 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.709 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:36.709 05:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:37.277 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.277 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:37.277 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.277 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.277 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.277 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.277 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.277 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:37.277 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.845 05:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.104 00:14:38.104 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.104 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.104 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.363 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.363 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.363 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.363 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.363 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.363 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.363 { 00:14:38.363 "cntlid": 113, 00:14:38.363 "qid": 0, 00:14:38.364 "state": "enabled", 00:14:38.364 "thread": "nvmf_tgt_poll_group_000", 00:14:38.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:38.364 "listen_address": { 00:14:38.364 "trtype": "TCP", 00:14:38.364 "adrfam": "IPv4", 00:14:38.364 "traddr": "10.0.0.3", 00:14:38.364 "trsvcid": "4420" 00:14:38.364 }, 00:14:38.364 "peer_address": { 00:14:38.364 "trtype": "TCP", 00:14:38.364 "adrfam": "IPv4", 00:14:38.364 "traddr": "10.0.0.1", 00:14:38.364 "trsvcid": "57458" 00:14:38.364 }, 00:14:38.364 "auth": { 00:14:38.364 "state": "completed", 00:14:38.364 "digest": "sha512", 00:14:38.364 "dhgroup": "ffdhe3072" 00:14:38.364 } 00:14:38.364 } 00:14:38.364 ]' 00:14:38.364 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.364 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.364 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.364 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:38.364 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.623 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.623 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.623 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.882 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:38.882 05:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret 
DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:39.449 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.450 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:39.450 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.450 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.450 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.450 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.450 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:39.450 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.721 05:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.036 00:14:40.036 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.036 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.036 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.294 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.295 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.295 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.295 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.295 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.295 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.295 { 00:14:40.295 "cntlid": 115, 00:14:40.295 "qid": 0, 00:14:40.295 "state": "enabled", 00:14:40.295 "thread": "nvmf_tgt_poll_group_000", 00:14:40.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:40.295 "listen_address": { 00:14:40.295 "trtype": "TCP", 00:14:40.295 "adrfam": "IPv4", 00:14:40.295 "traddr": "10.0.0.3", 00:14:40.295 "trsvcid": "4420" 00:14:40.295 }, 00:14:40.295 "peer_address": { 00:14:40.295 "trtype": "TCP", 00:14:40.295 "adrfam": "IPv4", 00:14:40.295 "traddr": "10.0.0.1", 00:14:40.295 "trsvcid": "57476" 00:14:40.295 }, 00:14:40.295 "auth": { 00:14:40.295 "state": "completed", 00:14:40.295 "digest": "sha512", 00:14:40.295 "dhgroup": "ffdhe3072" 00:14:40.295 } 00:14:40.295 } 00:14:40.295 ]' 00:14:40.295 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.295 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.295 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.553 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:40.553 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.553 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.553 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.553 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.813 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:40.813 05:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid 
ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:41.381 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.640 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:41.640 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.640 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.640 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.640 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.640 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:41.640 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.899 05:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.158 00:14:42.158 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.158 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.158 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.417 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.417 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.417 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.417 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.417 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.417 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.417 { 00:14:42.417 "cntlid": 117, 00:14:42.417 "qid": 0, 00:14:42.417 "state": "enabled", 00:14:42.417 "thread": "nvmf_tgt_poll_group_000", 00:14:42.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:42.417 "listen_address": { 00:14:42.417 "trtype": "TCP", 00:14:42.417 "adrfam": "IPv4", 00:14:42.417 "traddr": "10.0.0.3", 00:14:42.417 "trsvcid": "4420" 00:14:42.417 }, 00:14:42.417 "peer_address": { 00:14:42.417 "trtype": "TCP", 00:14:42.417 "adrfam": "IPv4", 00:14:42.417 "traddr": "10.0.0.1", 00:14:42.417 "trsvcid": "57500" 00:14:42.417 }, 00:14:42.417 "auth": { 00:14:42.417 "state": "completed", 00:14:42.417 "digest": "sha512", 00:14:42.417 "dhgroup": "ffdhe3072" 00:14:42.417 } 00:14:42.417 } 00:14:42.417 ]' 00:14:42.417 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.417 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.676 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.676 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:42.676 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.676 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.676 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.676 05:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.934 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:42.935 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
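After the attach, the test does not trust the return code alone; it reads the controller list on the host and the qpair list on the target and checks the negotiated auth fields, exactly as in the JSON dump above. A rough equivalent of those assertions, assuming the same subsystem NQN and the default target RPC socket:

  # The host-side controller should exist and be named nvme0.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0

  # Every qpair of the subsystem should report a completed sha512/ffdhe3072
  # DH-HMAC-CHAP negotiation.
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  echo "$qpairs" | jq -r '.[0].auth.digest'                 # expect: sha512
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'                # expect: ffdhe3072
  echo "$qpairs" | jq -r '.[0].auth.state'                  # expect: completed

  # Drop the host-side controller before the next key is exercised.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_detach_controller nvme0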
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:43.502 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.502 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:43.502 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.502 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.502 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.502 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.502 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.502 05:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.069 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.328 00:14:44.328 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.328 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.328 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.587 { 00:14:44.587 "cntlid": 119, 00:14:44.587 "qid": 0, 00:14:44.587 "state": "enabled", 00:14:44.587 "thread": "nvmf_tgt_poll_group_000", 00:14:44.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:44.587 "listen_address": { 00:14:44.587 "trtype": "TCP", 00:14:44.587 "adrfam": "IPv4", 00:14:44.587 "traddr": "10.0.0.3", 00:14:44.587 "trsvcid": "4420" 00:14:44.587 }, 00:14:44.587 "peer_address": { 00:14:44.587 "trtype": "TCP", 00:14:44.587 "adrfam": "IPv4", 00:14:44.587 "traddr": "10.0.0.1", 00:14:44.587 "trsvcid": "57530" 00:14:44.587 }, 00:14:44.587 "auth": { 00:14:44.587 "state": "completed", 00:14:44.587 "digest": "sha512", 00:14:44.587 "dhgroup": "ffdhe3072" 00:14:44.587 } 00:14:44.587 } 00:14:44.587 ]' 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.587 05:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.846 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:44.846 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:45.414 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.414 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:45.414 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.414 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.414 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.414 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.414 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.414 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:45.414 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.673 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.674 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.674 05:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
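Each pass ends with the same check through nvme-cli instead of the SPDK host stack: connect with the explicit DHHC-1 secrets, disconnect, and deregister the host so the next key starts clean. A sketch of that tail, with $host_secret and $ctrl_secret standing in for the generated DHHC-1:xx:...: strings seen above (for key3 no controller secret exists, so --dhchap-ctrl-secret is simply dropped):

  # Kernel initiator path: authenticate with the raw secrets.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec \
      --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"

  # Tear down and remove the host entry again.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec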
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.932 00:14:46.192 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.192 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.192 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.451 { 00:14:46.451 "cntlid": 121, 00:14:46.451 "qid": 0, 00:14:46.451 "state": "enabled", 00:14:46.451 "thread": "nvmf_tgt_poll_group_000", 00:14:46.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:46.451 "listen_address": { 00:14:46.451 "trtype": "TCP", 00:14:46.451 "adrfam": "IPv4", 00:14:46.451 "traddr": "10.0.0.3", 00:14:46.451 "trsvcid": "4420" 00:14:46.451 }, 00:14:46.451 "peer_address": { 00:14:46.451 "trtype": "TCP", 00:14:46.451 "adrfam": "IPv4", 00:14:46.451 "traddr": "10.0.0.1", 00:14:46.451 "trsvcid": "34028" 00:14:46.451 }, 00:14:46.451 "auth": { 00:14:46.451 "state": "completed", 00:14:46.451 "digest": "sha512", 00:14:46.451 "dhgroup": "ffdhe4096" 00:14:46.451 } 00:14:46.451 } 00:14:46.451 ]' 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.451 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.710 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret 
DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:46.710 05:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:47.278 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.278 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:47.278 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.278 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.278 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.278 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.278 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:47.278 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.847 05:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.106 00:14:48.106 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.106 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.106 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.365 { 00:14:48.365 "cntlid": 123, 00:14:48.365 "qid": 0, 00:14:48.365 "state": "enabled", 00:14:48.365 "thread": "nvmf_tgt_poll_group_000", 00:14:48.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:48.365 "listen_address": { 00:14:48.365 "trtype": "TCP", 00:14:48.365 "adrfam": "IPv4", 00:14:48.365 "traddr": "10.0.0.3", 00:14:48.365 "trsvcid": "4420" 00:14:48.365 }, 00:14:48.365 "peer_address": { 00:14:48.365 "trtype": "TCP", 00:14:48.365 "adrfam": "IPv4", 00:14:48.365 "traddr": "10.0.0.1", 00:14:48.365 "trsvcid": "34064" 00:14:48.365 }, 00:14:48.365 "auth": { 00:14:48.365 "state": "completed", 00:14:48.365 "digest": "sha512", 00:14:48.365 "dhgroup": "ffdhe4096" 00:14:48.365 } 00:14:48.365 } 00:14:48.365 ]' 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:48.365 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.625 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.625 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.625 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.884 05:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:48.884 05:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:49.452 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.452 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:49.452 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.452 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.452 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.452 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.452 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:49.452 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.714 05:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.714 05:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.287 00:14:50.287 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.287 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.287 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.600 { 00:14:50.600 "cntlid": 125, 00:14:50.600 "qid": 0, 00:14:50.600 "state": "enabled", 00:14:50.600 "thread": "nvmf_tgt_poll_group_000", 00:14:50.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:50.600 "listen_address": { 00:14:50.600 "trtype": "TCP", 00:14:50.600 "adrfam": "IPv4", 00:14:50.600 "traddr": "10.0.0.3", 00:14:50.600 "trsvcid": "4420" 00:14:50.600 }, 00:14:50.600 "peer_address": { 00:14:50.600 "trtype": "TCP", 00:14:50.600 "adrfam": "IPv4", 00:14:50.600 "traddr": "10.0.0.1", 00:14:50.600 "trsvcid": "34088" 00:14:50.600 }, 00:14:50.600 "auth": { 00:14:50.600 "state": "completed", 00:14:50.600 "digest": "sha512", 00:14:50.600 "dhgroup": "ffdhe4096" 00:14:50.600 } 00:14:50.600 } 00:14:50.600 ]' 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.600 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.601 05:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.872 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:50.872 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:51.439 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.439 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:51.440 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.440 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.440 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.440 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.440 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.440 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.698 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:51.698 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.698 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.698 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:51.698 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:51.699 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.699 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:14:51.699 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.699 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.957 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.957 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:51.957 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.957 05:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.215 00:14:52.215 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.215 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.215 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.474 { 00:14:52.474 "cntlid": 127, 00:14:52.474 "qid": 0, 00:14:52.474 "state": "enabled", 00:14:52.474 "thread": "nvmf_tgt_poll_group_000", 00:14:52.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:52.474 "listen_address": { 00:14:52.474 "trtype": "TCP", 00:14:52.474 "adrfam": "IPv4", 00:14:52.474 "traddr": "10.0.0.3", 00:14:52.474 "trsvcid": "4420" 00:14:52.474 }, 00:14:52.474 "peer_address": { 00:14:52.474 "trtype": "TCP", 00:14:52.474 "adrfam": "IPv4", 00:14:52.474 "traddr": "10.0.0.1", 00:14:52.474 "trsvcid": "34126" 00:14:52.474 }, 00:14:52.474 "auth": { 00:14:52.474 "state": "completed", 00:14:52.474 "digest": "sha512", 00:14:52.474 "dhgroup": "ffdhe4096" 00:14:52.474 } 00:14:52.474 } 00:14:52.474 ]' 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.474 05:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.042 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:53.042 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:14:53.609 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.609 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:53.609 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.609 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.609 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.609 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:53.609 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.609 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:53.609 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.868 05:32:33 
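At this point the outer loop has advanced to ffdhe6144 and restarts the key sweep, which is why the same add_host/attach/verify/detach pattern repeats with a new --dhchap-dhgroups value. Reconstructed from the xtrace lines, the driving structure looks roughly like the following; the keys/ckeys and dhgroups arrays (covering at least ffdhe3072, ffdhe4096 and ffdhe6144 in this excerpt) and the helper functions hostrpc and connect_authenticate are defined earlier in target/auth.sh and are assumed here, and the digest is pinned to sha512 in this stretch of the log:

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # One digest/dhgroup combination per pass on the host side.
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
              --dhchap-dhgroups "$dhgroup"
          # Runs add_host, attach, qpair checks, detach, nvme connect/disconnect,
          # remove_host; key3 has no ckey3, so its controller key is omitted.
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done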
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.868 05:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.437 00:14:54.437 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.437 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.437 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.696 { 00:14:54.696 "cntlid": 129, 00:14:54.696 "qid": 0, 00:14:54.696 "state": "enabled", 00:14:54.696 "thread": "nvmf_tgt_poll_group_000", 00:14:54.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:54.696 "listen_address": { 00:14:54.696 "trtype": "TCP", 00:14:54.696 "adrfam": "IPv4", 00:14:54.696 "traddr": "10.0.0.3", 00:14:54.696 "trsvcid": "4420" 00:14:54.696 }, 00:14:54.696 "peer_address": { 00:14:54.696 "trtype": "TCP", 00:14:54.696 "adrfam": "IPv4", 00:14:54.696 "traddr": "10.0.0.1", 00:14:54.696 "trsvcid": "34136" 00:14:54.696 }, 00:14:54.696 "auth": { 00:14:54.696 "state": "completed", 00:14:54.696 "digest": "sha512", 00:14:54.696 "dhgroup": "ffdhe6144" 00:14:54.696 } 00:14:54.696 } 00:14:54.696 ]' 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.696 05:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.955 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:54.955 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:14:55.890 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.890 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:55.890 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.890 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.890 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.890 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.890 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:55.890 05:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.890 05:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.890 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.456 00:14:56.457 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.457 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.457 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.714 { 00:14:56.714 "cntlid": 131, 00:14:56.714 "qid": 0, 00:14:56.714 "state": "enabled", 00:14:56.714 "thread": "nvmf_tgt_poll_group_000", 00:14:56.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:56.714 "listen_address": { 00:14:56.714 "trtype": "TCP", 00:14:56.714 "adrfam": "IPv4", 00:14:56.714 "traddr": "10.0.0.3", 00:14:56.714 "trsvcid": "4420" 00:14:56.714 }, 00:14:56.714 "peer_address": { 00:14:56.714 "trtype": "TCP", 00:14:56.714 "adrfam": "IPv4", 00:14:56.714 "traddr": "10.0.0.1", 00:14:56.714 "trsvcid": "55186" 00:14:56.714 }, 00:14:56.714 "auth": { 00:14:56.714 "state": "completed", 00:14:56.714 "digest": "sha512", 00:14:56.714 "dhgroup": "ffdhe6144" 00:14:56.714 } 00:14:56.714 } 00:14:56.714 ]' 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:56.714 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:14:56.973 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.973 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.973 05:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.231 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:57.231 05:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:14:57.798 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.798 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:14:57.798 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.798 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.798 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.798 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.798 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:57.798 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.367 05:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.367 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.626 00:14:58.626 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.626 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.626 05:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.884 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.885 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.885 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.885 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.885 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.885 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.885 { 00:14:58.885 "cntlid": 133, 00:14:58.885 "qid": 0, 00:14:58.885 "state": "enabled", 00:14:58.885 "thread": "nvmf_tgt_poll_group_000", 00:14:58.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:14:58.885 "listen_address": { 00:14:58.885 "trtype": "TCP", 00:14:58.885 "adrfam": "IPv4", 00:14:58.885 "traddr": "10.0.0.3", 00:14:58.885 "trsvcid": "4420" 00:14:58.885 }, 00:14:58.885 "peer_address": { 00:14:58.885 "trtype": "TCP", 00:14:58.885 "adrfam": "IPv4", 00:14:58.885 "traddr": "10.0.0.1", 00:14:58.885 "trsvcid": "55214" 00:14:58.885 }, 00:14:58.885 "auth": { 00:14:58.885 "state": "completed", 00:14:58.885 "digest": "sha512", 00:14:58.885 "dhgroup": "ffdhe6144" 00:14:58.885 } 00:14:58.885 } 00:14:58.885 ]' 00:14:59.144 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.144 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.144 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.144 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:59.144 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.144 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.144 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.144 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.402 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:14:59.402 05:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.339 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.908 00:15:00.908 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.908 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.908 05:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.167 { 00:15:01.167 "cntlid": 135, 00:15:01.167 "qid": 0, 00:15:01.167 "state": "enabled", 00:15:01.167 "thread": "nvmf_tgt_poll_group_000", 00:15:01.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:01.167 "listen_address": { 00:15:01.167 "trtype": "TCP", 00:15:01.167 "adrfam": "IPv4", 00:15:01.167 "traddr": "10.0.0.3", 00:15:01.167 "trsvcid": "4420" 00:15:01.167 }, 00:15:01.167 "peer_address": { 00:15:01.167 "trtype": "TCP", 00:15:01.167 "adrfam": "IPv4", 00:15:01.167 "traddr": "10.0.0.1", 00:15:01.167 "trsvcid": "55242" 00:15:01.167 }, 00:15:01.167 "auth": { 00:15:01.167 "state": "completed", 00:15:01.167 "digest": "sha512", 00:15:01.167 "dhgroup": "ffdhe6144" 00:15:01.167 } 00:15:01.167 } 00:15:01.167 ]' 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:01.167 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.426 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.426 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.426 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.685 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:15:01.685 05:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:15:02.296 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.296 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:02.296 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.296 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.296 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.296 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:02.296 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.296 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:02.296 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.555 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.123 00:15:03.382 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.382 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.382 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.641 { 00:15:03.641 "cntlid": 137, 00:15:03.641 "qid": 0, 00:15:03.641 "state": "enabled", 00:15:03.641 "thread": "nvmf_tgt_poll_group_000", 00:15:03.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:03.641 "listen_address": { 00:15:03.641 "trtype": "TCP", 00:15:03.641 "adrfam": "IPv4", 00:15:03.641 "traddr": "10.0.0.3", 00:15:03.641 "trsvcid": "4420" 00:15:03.641 }, 00:15:03.641 "peer_address": { 00:15:03.641 "trtype": "TCP", 00:15:03.641 "adrfam": "IPv4", 00:15:03.641 "traddr": "10.0.0.1", 00:15:03.641 "trsvcid": "55252" 00:15:03.641 }, 00:15:03.641 "auth": { 00:15:03.641 "state": "completed", 00:15:03.641 "digest": "sha512", 00:15:03.641 "dhgroup": "ffdhe8192" 00:15:03.641 } 00:15:03.641 } 00:15:03.641 ]' 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.641 05:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.641 05:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.900 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:15:03.900 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:15:04.836 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.836 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:04.836 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.836 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.836 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.836 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.836 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:04.836 05:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:04.836 05:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.836 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.768 00:15:05.768 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.768 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.768 05:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.768 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.768 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.768 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.768 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.027 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.027 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.027 { 00:15:06.027 "cntlid": 139, 00:15:06.027 "qid": 0, 00:15:06.027 "state": "enabled", 00:15:06.027 "thread": "nvmf_tgt_poll_group_000", 00:15:06.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:06.027 "listen_address": { 00:15:06.027 "trtype": "TCP", 00:15:06.027 "adrfam": "IPv4", 00:15:06.027 "traddr": "10.0.0.3", 00:15:06.027 "trsvcid": "4420" 00:15:06.027 }, 00:15:06.027 "peer_address": { 00:15:06.027 "trtype": "TCP", 00:15:06.027 "adrfam": "IPv4", 00:15:06.027 "traddr": "10.0.0.1", 00:15:06.027 "trsvcid": "44408" 00:15:06.027 }, 00:15:06.027 "auth": { 00:15:06.027 "state": "completed", 00:15:06.027 "digest": "sha512", 00:15:06.027 "dhgroup": "ffdhe8192" 00:15:06.027 } 00:15:06.027 } 00:15:06.027 ]' 00:15:06.027 05:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.027 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.027 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.027 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:06.027 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.027 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.028 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.028 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.287 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:15:06.287 05:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: --dhchap-ctrl-secret DHHC-1:02:OTIyY2VlMDdjN2Q2NzNkNDY4ZDYxMzFiMWRhODU4MjEwNGExOWJmMDI4MzkwY2RjO5uyaA==: 00:15:07.223 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.223 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:07.223 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.223 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.223 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.223 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.223 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.224 05:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.159 00:15:08.159 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.159 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.159 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.159 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.159 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.159 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.159 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.159 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.159 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.159 { 00:15:08.159 "cntlid": 141, 00:15:08.159 "qid": 0, 00:15:08.159 "state": "enabled", 00:15:08.159 "thread": "nvmf_tgt_poll_group_000", 00:15:08.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:08.159 "listen_address": { 00:15:08.159 "trtype": "TCP", 00:15:08.159 "adrfam": "IPv4", 00:15:08.159 "traddr": "10.0.0.3", 00:15:08.159 "trsvcid": "4420" 00:15:08.159 }, 00:15:08.159 "peer_address": { 00:15:08.159 "trtype": "TCP", 00:15:08.159 "adrfam": "IPv4", 00:15:08.159 "traddr": "10.0.0.1", 00:15:08.159 "trsvcid": "44434" 00:15:08.159 }, 00:15:08.159 "auth": { 00:15:08.159 "state": "completed", 00:15:08.160 "digest": 
"sha512", 00:15:08.160 "dhgroup": "ffdhe8192" 00:15:08.160 } 00:15:08.160 } 00:15:08.160 ]' 00:15:08.160 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.418 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.418 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.418 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:08.418 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.418 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.418 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.418 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.677 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:15:08.677 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:01:NTM5MDFhNDlhZDE0YjkwZjdhMTRjMjRjMmZlN2NkYTUsr0W5: 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:09.612 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.613 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.179 00:15:10.438 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.438 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.438 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.696 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.696 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.696 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.696 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.696 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.696 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.696 { 00:15:10.696 "cntlid": 143, 00:15:10.696 "qid": 0, 00:15:10.696 "state": "enabled", 00:15:10.696 "thread": "nvmf_tgt_poll_group_000", 00:15:10.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:10.696 "listen_address": { 00:15:10.696 "trtype": "TCP", 00:15:10.696 "adrfam": "IPv4", 00:15:10.696 "traddr": "10.0.0.3", 00:15:10.696 "trsvcid": "4420" 00:15:10.696 }, 00:15:10.696 "peer_address": { 00:15:10.696 "trtype": "TCP", 00:15:10.696 "adrfam": "IPv4", 00:15:10.696 "traddr": "10.0.0.1", 00:15:10.696 "trsvcid": "44446" 00:15:10.696 }, 00:15:10.696 "auth": { 00:15:10.696 "state": "completed", 00:15:10.696 
"digest": "sha512", 00:15:10.696 "dhgroup": "ffdhe8192" 00:15:10.696 } 00:15:10.696 } 00:15:10.696 ]' 00:15:10.696 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.696 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.697 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.697 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:10.697 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.697 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.697 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.697 05:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.263 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:15:11.263 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:15:11.830 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.830 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:11.830 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.830 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.830 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.830 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:11.830 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:11.831 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:11.831 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:11.831 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:11.831 05:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.089 05:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.026 00:15:13.026 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.026 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.026 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.285 { 00:15:13.285 "cntlid": 145, 00:15:13.285 "qid": 0, 00:15:13.285 "state": "enabled", 00:15:13.285 "thread": "nvmf_tgt_poll_group_000", 00:15:13.285 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:13.285 "listen_address": { 00:15:13.285 "trtype": "TCP", 00:15:13.285 "adrfam": "IPv4", 00:15:13.285 "traddr": "10.0.0.3", 00:15:13.285 "trsvcid": "4420" 00:15:13.285 }, 00:15:13.285 "peer_address": { 00:15:13.285 "trtype": "TCP", 00:15:13.285 "adrfam": "IPv4", 00:15:13.285 "traddr": "10.0.0.1", 00:15:13.285 "trsvcid": "44464" 00:15:13.285 }, 00:15:13.285 "auth": { 00:15:13.285 "state": "completed", 00:15:13.285 "digest": "sha512", 00:15:13.285 "dhgroup": "ffdhe8192" 00:15:13.285 } 00:15:13.285 } 00:15:13.285 ]' 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:13.285 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.551 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.551 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.551 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.810 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:15:13.810 05:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:00:MWU1YzQyYWFkYjUwNjE0NThlMWExNzI1M2QzODU3OTk1N2EwZTkwNjc4ZTU0ZmMy48o/FA==: --dhchap-ctrl-secret DHHC-1:03:OTRiOTQ4ZTg1YzdkNGYyMjYwZDRiN2ZiNDVkNzY4YmExMjg3NzAzYTk1YzExMjIxMzI3NTk2MjgxNjVkNGE2YWyw+4U=: 00:15:14.745 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.745 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:14.745 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.745 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.745 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.745 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 00:15:14.745 05:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.745 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.745 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:14.746 05:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:15.311 request: 00:15:15.311 { 00:15:15.311 "name": "nvme0", 00:15:15.311 "trtype": "tcp", 00:15:15.311 "traddr": "10.0.0.3", 00:15:15.311 "adrfam": "ipv4", 00:15:15.311 "trsvcid": "4420", 00:15:15.311 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:15.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:15.311 "prchk_reftag": false, 00:15:15.311 "prchk_guard": false, 00:15:15.311 "hdgst": false, 00:15:15.311 "ddgst": false, 00:15:15.311 "dhchap_key": "key2", 00:15:15.311 "allow_unrecognized_csi": false, 00:15:15.311 "method": "bdev_nvme_attach_controller", 00:15:15.311 "req_id": 1 00:15:15.311 } 00:15:15.311 Got JSON-RPC error response 00:15:15.311 response: 00:15:15.311 { 00:15:15.311 "code": -5, 00:15:15.311 "message": "Input/output error" 00:15:15.311 } 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:15.311 
05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:15.311 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:15.312 05:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:15.878 request: 00:15:15.878 { 00:15:15.878 "name": "nvme0", 00:15:15.878 "trtype": "tcp", 00:15:15.878 "traddr": "10.0.0.3", 00:15:15.878 "adrfam": "ipv4", 00:15:15.878 "trsvcid": "4420", 00:15:15.878 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:15.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:15.878 "prchk_reftag": false, 00:15:15.878 "prchk_guard": false, 00:15:15.878 "hdgst": false, 00:15:15.878 "ddgst": false, 00:15:15.878 "dhchap_key": "key1", 00:15:15.878 "dhchap_ctrlr_key": "ckey2", 00:15:15.878 "allow_unrecognized_csi": false, 00:15:15.878 "method": "bdev_nvme_attach_controller", 00:15:15.878 "req_id": 1 00:15:15.878 } 00:15:15.878 Got JSON-RPC error response 00:15:15.878 response: 00:15:15.878 { 
00:15:15.878 "code": -5, 00:15:15.878 "message": "Input/output error" 00:15:15.878 } 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.878 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.446 
request: 00:15:16.446 { 00:15:16.446 "name": "nvme0", 00:15:16.446 "trtype": "tcp", 00:15:16.446 "traddr": "10.0.0.3", 00:15:16.446 "adrfam": "ipv4", 00:15:16.446 "trsvcid": "4420", 00:15:16.446 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:16.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:16.446 "prchk_reftag": false, 00:15:16.446 "prchk_guard": false, 00:15:16.446 "hdgst": false, 00:15:16.446 "ddgst": false, 00:15:16.446 "dhchap_key": "key1", 00:15:16.446 "dhchap_ctrlr_key": "ckey1", 00:15:16.446 "allow_unrecognized_csi": false, 00:15:16.446 "method": "bdev_nvme_attach_controller", 00:15:16.446 "req_id": 1 00:15:16.446 } 00:15:16.446 Got JSON-RPC error response 00:15:16.446 response: 00:15:16.446 { 00:15:16.446 "code": -5, 00:15:16.446 "message": "Input/output error" 00:15:16.446 } 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 71669 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 71669 ']' 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 71669 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.446 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71669 00:15:16.447 killing process with pid 71669 00:15:16.447 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.447 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.447 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71669' 00:15:16.447 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 71669 00:15:16.447 05:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 71669 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.384 05:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=74735 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 74735 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 74735 ']' 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.384 05:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:18.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 74735 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 74735 ']' 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
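For context on the step traced above: the test tears down the previous nvmf target (pid 71669) and relaunches it inside the test network namespace with auth debug logging enabled (-L nvmf_auth), then waits for the new process (pid 74735) to answer on its RPC socket before continuing. A minimal, illustrative sketch of that relaunch pattern, reusing only the paths and flags shown in this trace (the readiness loop below is a simplification, not the exact waitforlisten helper from autotest_common.sh):

# Relaunch the SPDK nvmf target with auth logging inside the test namespace, as traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Illustrative readiness check: poll the default RPC socket until the target responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done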
00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.761 05:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.020 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.020 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:19.020 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:19.020 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.020 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.280 null0 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UYO 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.w1E ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.w1E 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0a6 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.nW5 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nW5 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:19.280 05:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.u4V 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.RgB ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RgB 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LwZ 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
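The surrounding trace loads each generated DH-HMAC-CHAP secret file into the target keyring (key0 through key3, plus the ckey* controller secrets), then runs connect_authenticate sha512 ffdhe8192 3: the host NQN is allowed on nqn.2024-03.io.spdk:cnode0 with --dhchap-key key3 and a host-side controller is attached with the same key, which is what the rpc.py invocation that follows performs. A condensed, illustrative sketch of that RPC sequence, using only commands and arguments that appear verbatim in this log (rpc.py without -s targets the nvmf target's default socket; -s /var/tmp/host.sock addresses the host-side application):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: register the generated secret file with the keyring under the name key3.
$rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.LwZ

# Target side: allow the host NQN on the subsystem, bound to key3 for DH-HMAC-CHAP.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3

# Host side: attach a controller over TCP/4420, authenticating with the same key.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3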
00:15:19.280 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.657 nvme0n1 00:15:20.657 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.657 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.657 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.657 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.657 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.657 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.657 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.657 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.657 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.657 { 00:15:20.657 "cntlid": 1, 00:15:20.657 "qid": 0, 00:15:20.657 "state": "enabled", 00:15:20.657 "thread": "nvmf_tgt_poll_group_000", 00:15:20.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:20.657 "listen_address": { 00:15:20.657 "trtype": "TCP", 00:15:20.657 "adrfam": "IPv4", 00:15:20.657 "traddr": "10.0.0.3", 00:15:20.657 "trsvcid": "4420" 00:15:20.657 }, 00:15:20.657 "peer_address": { 00:15:20.657 "trtype": "TCP", 00:15:20.657 "adrfam": "IPv4", 00:15:20.657 "traddr": "10.0.0.1", 00:15:20.657 "trsvcid": "37584" 00:15:20.657 }, 00:15:20.657 "auth": { 00:15:20.657 "state": "completed", 00:15:20.657 "digest": "sha512", 00:15:20.657 "dhgroup": "ffdhe8192" 00:15:20.657 } 00:15:20.657 } 00:15:20.658 ]' 00:15:20.658 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.658 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.658 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.918 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.918 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.918 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.918 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.918 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.178 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:15:21.178 05:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key3 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:22.115 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.374 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.633 request: 00:15:22.633 { 00:15:22.633 "name": "nvme0", 00:15:22.633 "trtype": "tcp", 00:15:22.633 "traddr": "10.0.0.3", 00:15:22.633 "adrfam": "ipv4", 00:15:22.633 "trsvcid": "4420", 00:15:22.633 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:22.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:22.633 "prchk_reftag": false, 00:15:22.633 "prchk_guard": false, 00:15:22.633 "hdgst": false, 00:15:22.633 "ddgst": false, 00:15:22.633 "dhchap_key": "key3", 00:15:22.633 "allow_unrecognized_csi": false, 00:15:22.633 "method": "bdev_nvme_attach_controller", 00:15:22.633 "req_id": 1 00:15:22.633 } 00:15:22.633 Got JSON-RPC error response 00:15:22.633 response: 00:15:22.633 { 00:15:22.633 "code": -5, 00:15:22.633 "message": "Input/output error" 00:15:22.633 } 00:15:22.633 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:22.633 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.633 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:22.633 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:22.633 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:22.633 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:22.634 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:22.634 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:22.892 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:22.892 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:22.892 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:22.892 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:22.892 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.892 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:22.892 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.892 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.892 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.893 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.152 request: 00:15:23.152 { 00:15:23.152 "name": "nvme0", 00:15:23.152 "trtype": "tcp", 00:15:23.152 "traddr": "10.0.0.3", 00:15:23.152 "adrfam": "ipv4", 00:15:23.152 "trsvcid": "4420", 00:15:23.152 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:23.152 "prchk_reftag": false, 00:15:23.152 "prchk_guard": false, 00:15:23.152 "hdgst": false, 00:15:23.152 "ddgst": false, 00:15:23.152 "dhchap_key": "key3", 00:15:23.152 "allow_unrecognized_csi": false, 00:15:23.152 "method": "bdev_nvme_attach_controller", 00:15:23.152 "req_id": 1 00:15:23.152 } 00:15:23.152 Got JSON-RPC error response 00:15:23.152 response: 00:15:23.152 { 00:15:23.152 "code": -5, 00:15:23.152 "message": "Input/output error" 00:15:23.152 } 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.152 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.411 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:23.411 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.411 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.671 05:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.930 request: 00:15:23.930 { 00:15:23.930 "name": "nvme0", 00:15:23.930 "trtype": "tcp", 00:15:23.930 "traddr": "10.0.0.3", 00:15:23.930 "adrfam": "ipv4", 00:15:23.930 "trsvcid": "4420", 00:15:23.930 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:23.930 "prchk_reftag": false, 00:15:23.930 "prchk_guard": false, 00:15:23.930 "hdgst": false, 00:15:23.930 "ddgst": false, 00:15:23.930 "dhchap_key": "key0", 00:15:23.930 "dhchap_ctrlr_key": "key1", 00:15:23.930 "allow_unrecognized_csi": false, 00:15:23.930 "method": "bdev_nvme_attach_controller", 00:15:23.930 "req_id": 1 00:15:23.930 } 00:15:23.930 Got JSON-RPC error response 00:15:23.930 response: 00:15:23.930 { 00:15:23.930 "code": -5, 00:15:23.930 "message": "Input/output error" 00:15:23.930 } 00:15:23.930 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:23.930 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:23.930 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:23.930 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:15:23.930 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:23.930 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:23.930 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:24.189 nvme0n1 00:15:24.448 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:24.448 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.448 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:24.707 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.707 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.707 05:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.965 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 00:15:24.965 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.965 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.965 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.965 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:24.965 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:24.965 05:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:26.392 nvme0n1 00:15:26.392 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:26.392 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.392 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:26.651 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.651 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:26.651 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.651 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.651 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.651 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:26.651 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:26.651 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.910 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.910 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:15:26.910 05:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid ccafdfa8-c1c5-4fda-89cf-286fc282eeec -l 0 --dhchap-secret DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: --dhchap-ctrl-secret DHHC-1:03:YmVjZDk4ODg0Zjk5NGJlZDk0OTdlZjAzMzNiYmQ2YmQ1NWQ3NTY4YzAxN2E3OWU3MjdhYzc0MjJmN2ZjZWMwNqfvgio=: 00:15:27.479 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:27.479 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:27.479 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:27.479 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:27.479 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:27.479 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:27.479 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:27.479 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.479 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.048 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:28.048 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:28.048 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:28.048 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:28.048 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.048 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:28.048 05:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.048 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:28.048 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:28.048 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:28.616 request: 00:15:28.616 { 00:15:28.616 "name": "nvme0", 00:15:28.616 "trtype": "tcp", 00:15:28.616 "traddr": "10.0.0.3", 00:15:28.616 "adrfam": "ipv4", 00:15:28.616 "trsvcid": "4420", 00:15:28.616 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec", 00:15:28.616 "prchk_reftag": false, 00:15:28.616 "prchk_guard": false, 00:15:28.616 "hdgst": false, 00:15:28.616 "ddgst": false, 00:15:28.616 "dhchap_key": "key1", 00:15:28.616 "allow_unrecognized_csi": false, 00:15:28.616 "method": "bdev_nvme_attach_controller", 00:15:28.616 "req_id": 1 00:15:28.616 } 00:15:28.616 Got JSON-RPC error response 00:15:28.616 response: 00:15:28.616 { 00:15:28.616 "code": -5, 00:15:28.616 "message": "Input/output error" 00:15:28.616 } 00:15:28.616 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:28.616 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.616 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.616 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.616 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:28.616 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:28.616 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:29.552 nvme0n1 00:15:29.552 
05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:29.552 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:29.552 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.811 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.811 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.811 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.070 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:30.070 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.070 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.070 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.070 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:30.070 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:30.070 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:30.329 nvme0n1 00:15:30.329 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:30.329 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.329 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:30.896 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.896 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.896 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.896 05:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: '' 2s 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: ]] 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmVhMWJlZmE5ODI0NWExOTdmMWJhZjBmNGNmOGM2NDdhrq54: 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:30.896 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: 2s 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:33.429 05:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: ]] 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZGY1ZTJlYjIxZTBkOTc5Yzg1MTRkZDZiMjIyNzFjMWEwN2VmMGM0Yjc0MjY2N2Jj99C+GA==: 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:33.429 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:35.333 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:36.270 nvme0n1 00:15:36.270 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:36.270 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.270 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.270 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.270 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:36.270 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:36.838 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:36.838 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:36.838 05:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:37.098 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:37.666 05:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:37.666 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:38.256 request: 00:15:38.256 { 00:15:38.256 "name": "nvme0", 00:15:38.256 "dhchap_key": "key1", 00:15:38.256 "dhchap_ctrlr_key": "key3", 00:15:38.256 "method": "bdev_nvme_set_keys", 00:15:38.256 "req_id": 1 00:15:38.256 } 00:15:38.256 Got JSON-RPC error response 00:15:38.257 response: 00:15:38.257 { 00:15:38.257 "code": -13, 00:15:38.257 "message": "Permission denied" 00:15:38.257 } 00:15:38.257 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:38.257 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:38.257 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:38.257 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:38.257 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:38.257 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.257 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:38.515 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:38.515 05:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:39.452 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:39.452 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:39.452 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.711 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:39.711 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:39.711 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.711 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.711 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.711 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:39.711 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:39.711 05:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.093 nvme0n1 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:41.093 05:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:41.356 request: 00:15:41.356 { 00:15:41.356 "name": "nvme0", 00:15:41.356 "dhchap_key": "key2", 00:15:41.356 "dhchap_ctrlr_key": "key0", 00:15:41.356 "method": "bdev_nvme_set_keys", 00:15:41.356 "req_id": 1 00:15:41.356 } 00:15:41.356 Got JSON-RPC error response 00:15:41.356 response: 00:15:41.356 { 00:15:41.356 "code": -13, 00:15:41.356 "message": "Permission denied" 00:15:41.356 } 00:15:41.356 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:41.356 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:41.356 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:41.356 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:41.356 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:41.356 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:41.356 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.614 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:41.614 05:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:42.993 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:42.993 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:42.993 05:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 71701 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 71701 ']' 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 71701 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71701 00:15:42.993 killing process with pid 71701 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:42.993 05:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71701' 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 71701 00:15:42.993 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 71701 00:15:44.898 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:44.898 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:44.898 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:44.898 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:44.898 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:44.898 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:44.898 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:44.898 rmmod nvme_tcp 00:15:44.898 rmmod nvme_fabrics 00:15:45.157 rmmod nvme_keyring 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 74735 ']' 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 74735 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 74735 ']' 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 74735 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74735 00:15:45.157 killing process with pid 74735 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74735' 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 74735 00:15:45.157 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 74735 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:46.093 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UYO /tmp/spdk.key-sha256.0a6 /tmp/spdk.key-sha384.u4V /tmp/spdk.key-sha512.LwZ /tmp/spdk.key-sha512.w1E /tmp/spdk.key-sha384.nW5 /tmp/spdk.key-sha256.RgB '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:46.353 00:15:46.353 real 3m15.019s 00:15:46.353 user 7m45.379s 00:15:46.353 sys 0m28.236s 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.353 ************************************ 00:15:46.353 END TEST nvmf_auth_target 
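For reference, the nvmf_auth_target run above rotates DH-CHAP keys on both the target and the host and checks that a mismatched pair is rejected with JSON-RPC error -13 ("Permission denied"). A minimal bash sketch of that RPC sequence follows; the socket path, NQNs and key slots are copied from the log, target-side calls assume the default RPC socket, and all error handling is omitted.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec

# 1. Target side: allow this host to authenticate with key0/key1.
$rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key key1

# 2. Host side: attach the controller with the matching key pair.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1

# 3. Rotate: switch the target to key2/key3, then re-key the live controller.
$rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# 4. A deliberately mismatched pair (e.g. key1 with key3) is expected to fail with
#    "Permission denied" (-13); that negative case is what the NOT wrapper above exercises.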
00:15:46.353 ************************************ 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:46.353 ************************************ 00:15:46.353 START TEST nvmf_bdevio_no_huge 00:15:46.353 ************************************ 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:46.353 * Looking for test storage... 00:15:46.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:46.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.353 --rc genhtml_branch_coverage=1 00:15:46.353 --rc genhtml_function_coverage=1 00:15:46.353 --rc genhtml_legend=1 00:15:46.353 --rc geninfo_all_blocks=1 00:15:46.353 --rc geninfo_unexecuted_blocks=1 00:15:46.353 00:15:46.353 ' 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:46.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.353 --rc genhtml_branch_coverage=1 00:15:46.353 --rc genhtml_function_coverage=1 00:15:46.353 --rc genhtml_legend=1 00:15:46.353 --rc geninfo_all_blocks=1 00:15:46.353 --rc geninfo_unexecuted_blocks=1 00:15:46.353 00:15:46.353 ' 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:46.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.353 --rc genhtml_branch_coverage=1 00:15:46.353 --rc genhtml_function_coverage=1 00:15:46.353 --rc genhtml_legend=1 00:15:46.353 --rc geninfo_all_blocks=1 00:15:46.353 --rc geninfo_unexecuted_blocks=1 00:15:46.353 00:15:46.353 ' 00:15:46.353 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:46.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.353 --rc genhtml_branch_coverage=1 00:15:46.353 --rc genhtml_function_coverage=1 00:15:46.353 --rc genhtml_legend=1 00:15:46.353 --rc geninfo_all_blocks=1 00:15:46.353 --rc geninfo_unexecuted_blocks=1 00:15:46.353 00:15:46.353 ' 00:15:46.354 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.354 
05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:46.614 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.614 
05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:46.614 Cannot find device "nvmf_init_br" 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:46.614 Cannot find device "nvmf_init_br2" 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:46.614 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:46.614 Cannot find device "nvmf_tgt_br" 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.615 Cannot find device "nvmf_tgt_br2" 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:46.615 Cannot find device "nvmf_init_br" 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:46.615 Cannot find device "nvmf_init_br2" 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:46.615 Cannot find device "nvmf_tgt_br" 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:46.615 Cannot find device "nvmf_tgt_br2" 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:46.615 Cannot find device "nvmf_br" 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:46.615 Cannot find device "nvmf_init_if" 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:46.615 Cannot find device "nvmf_init_if2" 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:46.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.615 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:46.874 05:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:46.874 05:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:46.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:15:46.874 00:15:46.874 --- 10.0.0.3 ping statistics --- 00:15:46.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.874 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:46.874 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:46.874 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:15:46.874 00:15:46.874 --- 10.0.0.4 ping statistics --- 00:15:46.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.874 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:46.874 00:15:46.874 --- 10.0.0.1 ping statistics --- 00:15:46.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.874 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:46.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:46.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:46.874 00:15:46.874 --- 10.0.0.2 ping statistics --- 00:15:46.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.874 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=75409 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 75409 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 75409 ']' 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.874 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.133 [2024-12-16 05:33:27.230142] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
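For reference, the target in this no-huge run is started inside the nvmf_tgt_ns_spdk network namespace without hugepages. The sketch below restates the launch command from the log; the readiness loop is only a simplified stand-in for the harness's waitforlisten helper, with spdk_get_version used as a generic liveness probe.

# Launch the target in the test namespace, 1024 MiB of regular memory, cores 0x78.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Poll the default RPC socket until the application answers, then proceed with RPCs.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done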
00:15:47.133 [2024-12-16 05:33:27.230912] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:47.392 [2024-12-16 05:33:27.448968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.392 [2024-12-16 05:33:27.590350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.392 [2024-12-16 05:33:27.590479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.392 [2024-12-16 05:33:27.590511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.392 [2024-12-16 05:33:27.590524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.392 [2024-12-16 05:33:27.590534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.392 [2024-12-16 05:33:27.592278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:15:47.392 [2024-12-16 05:33:27.592424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:15:47.392 [2024-12-16 05:33:27.592569] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:15:47.392 [2024-12-16 05:33:27.592959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.651 [2024-12-16 05:33:27.751955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:48.219 [2024-12-16 05:33:28.243124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:48.219 Malloc0 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.219 05:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:48.219 [2024-12-16 05:33:28.343195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:48.219 { 00:15:48.219 "params": { 00:15:48.219 "name": "Nvme$subsystem", 00:15:48.219 "trtype": "$TEST_TRANSPORT", 00:15:48.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.219 "adrfam": "ipv4", 00:15:48.219 "trsvcid": "$NVMF_PORT", 00:15:48.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.219 "hdgst": ${hdgst:-false}, 00:15:48.219 "ddgst": ${ddgst:-false} 00:15:48.219 }, 00:15:48.219 "method": "bdev_nvme_attach_controller" 00:15:48.219 } 00:15:48.219 EOF 00:15:48.219 )") 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
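The gen_nvmf_target_json heredoc above emits a bdev_nvme_attach_controller entry that bdevio consumes through process substitution (the --json /dev/fd/62 seen in the log). A reduced sketch of that pattern follows; the outer "subsystems"/"config" wrapper is the standard SPDK JSON-config layout and is assumed here, while the connection parameters are the ones the test prints.

gen_target_json() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# bdevio reads the generated config from the process-substituted file descriptor.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_target_json) --no-huge -s 1024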
00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:48.219 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:48.219 "params": { 00:15:48.219 "name": "Nvme1", 00:15:48.219 "trtype": "tcp", 00:15:48.219 "traddr": "10.0.0.3", 00:15:48.219 "adrfam": "ipv4", 00:15:48.219 "trsvcid": "4420", 00:15:48.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:48.219 "hdgst": false, 00:15:48.219 "ddgst": false 00:15:48.219 }, 00:15:48.219 "method": "bdev_nvme_attach_controller" 00:15:48.219 }' 00:15:48.219 [2024-12-16 05:33:28.458699] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:48.219 [2024-12-16 05:33:28.458850] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid75449 ] 00:15:48.478 [2024-12-16 05:33:28.678843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.736 [2024-12-16 05:33:28.852653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.736 [2024-12-16 05:33:28.852769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.736 [2024-12-16 05:33:28.852783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.994 [2024-12-16 05:33:29.048294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.253 I/O targets: 00:15:49.253 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:49.253 00:15:49.253 00:15:49.253 CUnit - A unit testing framework for C - Version 2.1-3 00:15:49.253 http://cunit.sourceforge.net/ 00:15:49.253 00:15:49.253 00:15:49.253 Suite: bdevio tests on: Nvme1n1 00:15:49.253 Test: blockdev write read block ...passed 00:15:49.253 Test: blockdev write zeroes read block ...passed 00:15:49.253 Test: blockdev write zeroes read no split ...passed 00:15:49.253 Test: blockdev write zeroes read split ...passed 00:15:49.253 Test: blockdev write zeroes read split partial ...passed 00:15:49.253 Test: blockdev reset ...[2024-12-16 05:33:29.405118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:49.253 [2024-12-16 05:33:29.405292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:15:49.253 [2024-12-16 05:33:29.426653] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:49.253 passed 00:15:49.253 Test: blockdev write read 8 blocks ...passed 00:15:49.253 Test: blockdev write read size > 128k ...passed 00:15:49.253 Test: blockdev write read invalid size ...passed 00:15:49.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:49.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:49.253 Test: blockdev write read max offset ...passed 00:15:49.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:49.253 Test: blockdev writev readv 8 blocks ...passed 00:15:49.253 Test: blockdev writev readv 30 x 1block ...passed 00:15:49.253 Test: blockdev writev readv block ...passed 00:15:49.253 Test: blockdev writev readv size > 128k ...passed 00:15:49.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:49.253 Test: blockdev comparev and writev ...[2024-12-16 05:33:29.440392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:49.253 [2024-12-16 05:33:29.440484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:49.253 [2024-12-16 05:33:29.440533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:49.253 [2024-12-16 05:33:29.440561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:49.253 [2024-12-16 05:33:29.441129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:49.253 [2024-12-16 05:33:29.441182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:49.253 [2024-12-16 05:33:29.441212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:49.253 [2024-12-16 05:33:29.441233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:49.253 [2024-12-16 05:33:29.441696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:49.253 [2024-12-16 05:33:29.441745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:49.253 [2024-12-16 05:33:29.441774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:49.253 [2024-12-16 05:33:29.441802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:49.253 [2024-12-16 05:33:29.442303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:49.254 [2024-12-16 05:33:29.442352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:49.254 [2024-12-16 05:33:29.442381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:49.254 [2024-12-16 05:33:29.442403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:49.254 passed 00:15:49.254 Test: blockdev nvme passthru rw ...passed 00:15:49.254 Test: blockdev nvme passthru vendor specific ...[2024-12-16 05:33:29.443757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:49.254 [2024-12-16 05:33:29.443812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:49.254 [2024-12-16 05:33:29.443982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:49.254 [2024-12-16 05:33:29.444013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:49.254 [2024-12-16 05:33:29.444169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:49.254 [2024-12-16 05:33:29.444201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:49.254 [2024-12-16 05:33:29.444348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:49.254 [2024-12-16 05:33:29.444389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:49.254 passed 00:15:49.254 Test: blockdev nvme admin passthru ...passed 00:15:49.254 Test: blockdev copy ...passed 00:15:49.254 00:15:49.254 Run Summary: Type Total Ran Passed Failed Inactive 00:15:49.254 suites 1 1 n/a 0 0 00:15:49.254 tests 23 23 23 0 0 00:15:49.254 asserts 152 152 152 0 n/a 00:15:49.254 00:15:49.254 Elapsed time = 0.264 seconds 00:15:50.215 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.215 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.215 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.215 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.215 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:50.215 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:50.215 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:50.215 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:50.216 rmmod nvme_tcp 00:15:50.216 rmmod nvme_fabrics 00:15:50.216 rmmod nvme_keyring 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 75409 ']' 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 75409 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 75409 ']' 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 75409 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75409 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:50.216 killing process with pid 75409 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75409' 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 75409 00:15:50.216 05:33:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 75409 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:51.152 05:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:51.152 00:15:51.152 real 0m4.975s 00:15:51.152 user 0m17.066s 00:15:51.152 sys 0m1.587s 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.152 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:51.152 ************************************ 00:15:51.152 END TEST nvmf_bdevio_no_huge 00:15:51.152 ************************************ 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.412 ************************************ 00:15:51.412 START TEST nvmf_tls 00:15:51.412 ************************************ 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:51.412 * Looking for test storage... 
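The nvmftestfini teardown traced at the end of the bdevio test above (killprocess, module unload, iptables cleanup, veth/bridge/namespace removal) condenses to roughly the commands below. This is a hedged summary of the traced steps, not the literal function bodies; the final namespace deletion is an assumption, since remove_spdk_ns is not expanded in the trace.

kill "$nvmfpid"                                        # killprocess (pid 75409 in the trace above)
modprobe -v -r nvme-tcp                                # unloads nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK_NVMF-tagged rules
ip link set nvmf_init_br nomaster                      # detach bridge ports (likewise *_br2 and *_tgt_br*)
ip link set nvmf_init_br down                          # bring the veth ends down (likewise the others)
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                       # assumed effect of remove_spdk_ns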
00:15:51.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.412 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.413 --rc genhtml_branch_coverage=1 00:15:51.413 --rc genhtml_function_coverage=1 00:15:51.413 --rc genhtml_legend=1 00:15:51.413 --rc geninfo_all_blocks=1 00:15:51.413 --rc geninfo_unexecuted_blocks=1 00:15:51.413 00:15:51.413 ' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.413 --rc genhtml_branch_coverage=1 00:15:51.413 --rc genhtml_function_coverage=1 00:15:51.413 --rc genhtml_legend=1 00:15:51.413 --rc geninfo_all_blocks=1 00:15:51.413 --rc geninfo_unexecuted_blocks=1 00:15:51.413 00:15:51.413 ' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.413 --rc genhtml_branch_coverage=1 00:15:51.413 --rc genhtml_function_coverage=1 00:15:51.413 --rc genhtml_legend=1 00:15:51.413 --rc geninfo_all_blocks=1 00:15:51.413 --rc geninfo_unexecuted_blocks=1 00:15:51.413 00:15:51.413 ' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.413 --rc genhtml_branch_coverage=1 00:15:51.413 --rc genhtml_function_coverage=1 00:15:51.413 --rc genhtml_legend=1 00:15:51.413 --rc geninfo_all_blocks=1 00:15:51.413 --rc geninfo_unexecuted_blocks=1 00:15:51.413 00:15:51.413 ' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.413 05:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:51.413 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:51.413 
05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:51.413 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:51.673 Cannot find device "nvmf_init_br" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:51.673 Cannot find device "nvmf_init_br2" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:51.673 Cannot find device "nvmf_tgt_br" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.673 Cannot find device "nvmf_tgt_br2" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:51.673 Cannot find device "nvmf_init_br" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:51.673 Cannot find device "nvmf_init_br2" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:51.673 Cannot find device "nvmf_tgt_br" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:51.673 Cannot find device "nvmf_tgt_br2" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:51.673 Cannot find device "nvmf_br" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:51.673 Cannot find device "nvmf_init_if" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:51.673 Cannot find device "nvmf_init_if2" 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:51.673 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.933 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:51.933 05:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:51.933 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.933 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:51.933 00:15:51.933 --- 10.0.0.3 ping statistics --- 00:15:51.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.933 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:51.933 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:51.933 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:15:51.933 00:15:51.933 --- 10.0.0.4 ping statistics --- 00:15:51.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.933 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:51.933 00:15:51.933 --- 10.0.0.1 ping statistics --- 00:15:51.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.933 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:51.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:51.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:15:51.933 00:15:51.933 --- 10.0.0.2 ping statistics --- 00:15:51.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.933 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75723 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75723 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75723 ']' 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.933 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.193 [2024-12-16 05:33:32.214809] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
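For the TLS test the same helper rebuilds the namespace/veth topology before the target application is started, and the four pings above confirm reachability in both directions (host to 10.0.0.3/10.0.0.4, namespace to 10.0.0.1/10.0.0.2). A hedged condensation of the nvmf_veth_init trace, showing only the first interface of each pair (the *_if2/*_br2 pair is set up identically):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end is moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # both *_br ends join the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # tagged with an 'SPDK_NVMF:' comment in the helper
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # so teardown can strip only these rules
ping -c 1 10.0.0.3                                              # host -> target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # namespace -> host address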
00:15:52.193 [2024-12-16 05:33:32.215226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.193 [2024-12-16 05:33:32.408894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.452 [2024-12-16 05:33:32.534478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.452 [2024-12-16 05:33:32.534876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.452 [2024-12-16 05:33:32.535076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.452 [2024-12-16 05:33:32.535387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.452 [2024-12-16 05:33:32.535423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.452 [2024-12-16 05:33:32.537101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.021 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.021 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:53.021 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.021 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:53.021 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.021 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.021 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:53.021 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:53.280 true 00:15:53.280 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:53.280 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:53.539 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:53.539 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:53.539 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:53.798 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:53.798 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:54.056 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:54.056 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:54.056 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:54.315 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:54.315 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:54.883 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:54.883 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:54.883 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:54.883 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:54.883 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:54.883 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:54.883 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:55.141 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:55.141 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:55.399 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:55.399 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:55.399 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:55.658 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:55.658 05:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:55.917 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ze7Ebd6bk8 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.7D8A1lNJnJ 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ze7Ebd6bk8 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.7D8A1lNJnJ 00:15:56.176 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:56.435 05:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:57.002 [2024-12-16 05:33:36.961742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.002 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ze7Ebd6bk8 00:15:57.002 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ze7Ebd6bk8 00:15:57.002 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:57.261 [2024-12-16 05:33:37.387649] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.261 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:57.520 05:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:57.779 [2024-12-16 05:33:38.036016] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:57.779 [2024-12-16 05:33:38.036405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:58.037 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:58.296 malloc0 00:15:58.296 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:58.555 05:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ze7Ebd6bk8 00:15:58.814 05:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:59.072 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ze7Ebd6bk8 00:16:11.278 Initializing NVMe Controllers 00:16:11.278 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:11.278 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:11.278 Initialization complete. Launching workers. 00:16:11.278 ======================================================== 00:16:11.278 Latency(us) 00:16:11.278 Device Information : IOPS MiB/s Average min max 00:16:11.278 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7225.65 28.23 8859.70 2567.80 16326.43 00:16:11.278 ======================================================== 00:16:11.278 Total : 7225.65 28.23 8859.70 2567.80 16326.43 00:16:11.278 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ze7Ebd6bk8 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ze7Ebd6bk8 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75972 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75972 /var/tmp/bdevperf.sock 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75972 ']' 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:11.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
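The two NVMeTLSkey-1:01:...: strings generated earlier by format_interchange_psk follow the TLS PSK interchange format. Judging from the traced variables (prefix=NVMeTLSkey-1, the hex key, digest=1) and the "python -" invocation, the helper appears to append a CRC32 to the configured secret and base64-encode the result; a minimal sketch, assuming that reading of the trace (the heredoc body itself is not shown in the log):

python - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"               # the traced secret, used as literal bytes
crc = zlib.crc32(key).to_bytes(4, byteorder="little")    # 4-byte CRC32 appended before encoding
print("NVMeTLSkey-1:{:02x}:{}:".format(1, base64.b64encode(key + crc).decode()))   # 01 is the hash identifier field
EOF
# expected to print the same key0 value seen above:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The resulting strings are written to /tmp/tmp.ze7Ebd6bk8 and /tmp/tmp.7D8A1lNJnJ with mode 0600, and the first one is registered as key0 via keyring_file_add_key and referenced with --psk key0 on both the target (nvmf_subsystem_add_host) and the initiator side (bdev_nvme_attach_controller), as traced above.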
00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.278 05:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.278 [2024-12-16 05:33:49.761390] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:11.278 [2024-12-16 05:33:49.761616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75972 ] 00:16:11.278 [2024-12-16 05:33:49.951896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.278 [2024-12-16 05:33:50.081772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.278 [2024-12-16 05:33:50.262191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:11.278 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.278 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:11.278 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ze7Ebd6bk8 00:16:11.278 05:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:11.278 [2024-12-16 05:33:51.252110] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:11.278 TLSTESTn1 00:16:11.278 05:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:11.278 Running I/O for 10 seconds... 
00:16:13.589 3014.00 IOPS, 11.77 MiB/s [2024-12-16T05:33:54.782Z] 3001.00 IOPS, 11.72 MiB/s [2024-12-16T05:33:55.731Z] 2916.67 IOPS, 11.39 MiB/s [2024-12-16T05:33:56.668Z] 2963.50 IOPS, 11.58 MiB/s [2024-12-16T05:33:57.604Z] 2994.80 IOPS, 11.70 MiB/s [2024-12-16T05:33:58.606Z] 2975.17 IOPS, 11.62 MiB/s [2024-12-16T05:33:59.544Z] 2971.57 IOPS, 11.61 MiB/s [2024-12-16T05:34:00.480Z] 2964.12 IOPS, 11.58 MiB/s [2024-12-16T05:34:01.858Z] 2958.11 IOPS, 11.56 MiB/s [2024-12-16T05:34:01.858Z] 2955.20 IOPS, 11.54 MiB/s 00:16:21.599 Latency(us) 00:16:21.599 [2024-12-16T05:34:01.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.599 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:21.599 Verification LBA range: start 0x0 length 0x2000 00:16:21.599 TLSTESTn1 : 10.02 2961.48 11.57 0.00 0.00 43137.34 8043.05 32410.53 00:16:21.599 [2024-12-16T05:34:01.858Z] =================================================================================================================== 00:16:21.599 [2024-12-16T05:34:01.858Z] Total : 2961.48 11.57 0.00 0.00 43137.34 8043.05 32410.53 00:16:21.599 { 00:16:21.599 "results": [ 00:16:21.599 { 00:16:21.599 "job": "TLSTESTn1", 00:16:21.599 "core_mask": "0x4", 00:16:21.599 "workload": "verify", 00:16:21.599 "status": "finished", 00:16:21.599 "verify_range": { 00:16:21.599 "start": 0, 00:16:21.599 "length": 8192 00:16:21.599 }, 00:16:21.599 "queue_depth": 128, 00:16:21.599 "io_size": 4096, 00:16:21.599 "runtime": 10.021352, 00:16:21.599 "iops": 2961.4766550461454, 00:16:21.599 "mibps": 11.568268183774006, 00:16:21.599 "io_failed": 0, 00:16:21.599 "io_timeout": 0, 00:16:21.599 "avg_latency_us": 43137.339840347, 00:16:21.599 "min_latency_us": 8043.054545454545, 00:16:21.599 "max_latency_us": 32410.53090909091 00:16:21.599 } 00:16:21.599 ], 00:16:21.599 "core_count": 1 00:16:21.599 } 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 75972 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75972 ']' 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75972 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75972 00:16:21.599 killing process with pid 75972 00:16:21.599 Received shutdown signal, test time was about 10.000000 seconds 00:16:21.599 00:16:21.599 Latency(us) 00:16:21.599 [2024-12-16T05:34:01.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.599 [2024-12-16T05:34:01.858Z] =================================================================================================================== 00:16:21.599 [2024-12-16T05:34:01.858Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 75972' 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75972 00:16:21.599 05:34:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75972 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7D8A1lNJnJ 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7D8A1lNJnJ 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7D8A1lNJnJ 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7D8A1lNJnJ 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76123 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76123 /var/tmp/bdevperf.sock 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76123 ']' 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.536 05:34:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.536 [2024-12-16 05:34:02.601190] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:22.536 [2024-12-16 05:34:02.601615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76123 ] 00:16:22.536 [2024-12-16 05:34:02.766763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.796 [2024-12-16 05:34:02.860577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.796 [2024-12-16 05:34:03.020917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.364 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.364 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:23.364 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7D8A1lNJnJ 00:16:23.623 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:23.882 [2024-12-16 05:34:04.136174] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:24.142 [2024-12-16 05:34:04.145899] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:24.142 [2024-12-16 05:34:04.145999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:24.142 [2024-12-16 05:34:04.146955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:24.142 [2024-12-16 05:34:04.147950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:24.142 [2024-12-16 05:34:04.147994] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:24.142 [2024-12-16 05:34:04.148018] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:24.142 [2024-12-16 05:34:04.148035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
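The trace above is the first negative case in target/tls.sh (test 147): bdevperf (pid 76123) registers /tmp/tmp.7D8A1lNJnJ as key0 and tries to attach to nqn.2016-06.io.spdk:cnode1 as host1, but that file presumably holds a different PSK than the one the target was provisioned with for this host, so the TLS handshake never completes: the socket is torn down (errno 107, then a bad file descriptor on the next poll) and the controller ends up in failed state. The JSON-RPC dump recorded next reports the resulting -5 Input/output error. A minimal sketch of the client-side RPC sequence each of these negative cases exercises, assuming a bdevperf instance started with -z is already listening on /var/tmp/bdevperf.sock (commands copied from the log; the RPC and SOCK shell variables are shorthand added here):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    # register the client-side PSK file under the keyring name "key0"
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.7D8A1lNJnJ
    # TLS-protected attach; with a PSK the target cannot validate this fails
    # with "Input/output error" (-5), as the JSON-RPC response below shows
    $RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0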
00:16:24.142 request: 00:16:24.142 { 00:16:24.142 "name": "TLSTEST", 00:16:24.142 "trtype": "tcp", 00:16:24.142 "traddr": "10.0.0.3", 00:16:24.142 "adrfam": "ipv4", 00:16:24.142 "trsvcid": "4420", 00:16:24.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.142 "prchk_reftag": false, 00:16:24.142 "prchk_guard": false, 00:16:24.142 "hdgst": false, 00:16:24.142 "ddgst": false, 00:16:24.142 "psk": "key0", 00:16:24.142 "allow_unrecognized_csi": false, 00:16:24.142 "method": "bdev_nvme_attach_controller", 00:16:24.142 "req_id": 1 00:16:24.142 } 00:16:24.142 Got JSON-RPC error response 00:16:24.142 response: 00:16:24.142 { 00:16:24.142 "code": -5, 00:16:24.142 "message": "Input/output error" 00:16:24.142 } 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76123 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76123 ']' 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76123 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76123 00:16:24.142 killing process with pid 76123 00:16:24.142 Received shutdown signal, test time was about 10.000000 seconds 00:16:24.142 00:16:24.142 Latency(us) 00:16:24.142 [2024-12-16T05:34:04.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.142 [2024-12-16T05:34:04.401Z] =================================================================================================================== 00:16:24.142 [2024-12-16T05:34:04.401Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76123' 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76123 00:16:24.142 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76123 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ze7Ebd6bk8 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ze7Ebd6bk8 
00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ze7Ebd6bk8 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ze7Ebd6bk8 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76158 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76158 /var/tmp/bdevperf.sock 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76158 ']' 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.080 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.080 [2024-12-16 05:34:05.134645] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:25.080 [2024-12-16 05:34:05.135072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76158 ] 00:16:25.080 [2024-12-16 05:34:05.303280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.338 [2024-12-16 05:34:05.400253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.338 [2024-12-16 05:34:05.566254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.904 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.904 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:25.904 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ze7Ebd6bk8 00:16:26.164 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:26.423 [2024-12-16 05:34:06.556676] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:26.423 [2024-12-16 05:34:06.565329] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:26.423 [2024-12-16 05:34:06.565577] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:26.423 [2024-12-16 05:34:06.565850] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:26.423 [2024-12-16 05:34:06.566729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:26.423 [2024-12-16 05:34:06.567706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:26.423 [2024-12-16 05:34:06.568696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:26.423 [2024-12-16 05:34:06.568745] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:26.423 [2024-12-16 05:34:06.568763] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:26.423 [2024-12-16 05:34:06.568780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
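This second negative case (test 150) reuses the original key file /tmp/tmp.ze7Ebd6bk8 but connects as nqn.2016-06.io.spdk:host2, and the failure now surfaces on the target side first: "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1". The target resolves the TLS PSK by an identity string built from the host NQN and subsystem NQN, and only host1 was bound to cnode1 with a PSK, so the handshake is rejected and the initiator again sees errno 107 and a -5 Input/output error, as the JSON-RPC dump below records. The case after it (test 153) is the mirror image, host1 against the never-created cnode2. For the identity lookup to succeed, the pairing would have to be provisioned on the target with the same RPCs the harness runs during setup_nvmf_tgt later in this log; a hedged sketch (binding host2 to cnode1 is hypothetical here, the log only ever binds host1):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # register the PSK on the target and bind it to the host/subsystem pair;
    # this is what makes the identity "NVMe0R01 <hostnqn> <subnqn>" resolvable
    $RPC keyring_file_add_key key0 /tmp/tmp.ze7Ebd6bk8
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk key0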
00:16:26.423 request: 00:16:26.423 { 00:16:26.423 "name": "TLSTEST", 00:16:26.423 "trtype": "tcp", 00:16:26.423 "traddr": "10.0.0.3", 00:16:26.423 "adrfam": "ipv4", 00:16:26.423 "trsvcid": "4420", 00:16:26.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.423 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:26.423 "prchk_reftag": false, 00:16:26.423 "prchk_guard": false, 00:16:26.423 "hdgst": false, 00:16:26.423 "ddgst": false, 00:16:26.423 "psk": "key0", 00:16:26.423 "allow_unrecognized_csi": false, 00:16:26.423 "method": "bdev_nvme_attach_controller", 00:16:26.423 "req_id": 1 00:16:26.423 } 00:16:26.423 Got JSON-RPC error response 00:16:26.423 response: 00:16:26.423 { 00:16:26.423 "code": -5, 00:16:26.423 "message": "Input/output error" 00:16:26.423 } 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76158 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76158 ']' 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76158 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76158 00:16:26.423 killing process with pid 76158 00:16:26.423 Received shutdown signal, test time was about 10.000000 seconds 00:16:26.423 00:16:26.423 Latency(us) 00:16:26.423 [2024-12-16T05:34:06.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.423 [2024-12-16T05:34:06.682Z] =================================================================================================================== 00:16:26.423 [2024-12-16T05:34:06.682Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76158' 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76158 00:16:26.423 05:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76158 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ze7Ebd6bk8 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ze7Ebd6bk8 
00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ze7Ebd6bk8 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ze7Ebd6bk8 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76199 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76199 /var/tmp/bdevperf.sock 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76199 ']' 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.366 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.367 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.367 05:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.626 [2024-12-16 05:34:07.648879] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:27.626 [2024-12-16 05:34:07.649348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76199 ] 00:16:27.626 [2024-12-16 05:34:07.828109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.885 [2024-12-16 05:34:07.920764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.885 [2024-12-16 05:34:08.082555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:28.452 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.452 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:28.452 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ze7Ebd6bk8 00:16:28.711 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:28.970 [2024-12-16 05:34:09.055766] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:28.970 [2024-12-16 05:34:09.065702] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:28.970 [2024-12-16 05:34:09.065749] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:28.970 [2024-12-16 05:34:09.065828] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:28.970 [2024-12-16 05:34:09.065961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:28.970 [2024-12-16 05:34:09.066937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:28.970 [2024-12-16 05:34:09.067929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:28.970 [2024-12-16 05:34:09.067986] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:28.970 [2024-12-16 05:34:09.068007] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:28.970 [2024-12-16 05:34:09.068026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:28.970 request: 00:16:28.970 { 00:16:28.970 "name": "TLSTEST", 00:16:28.970 "trtype": "tcp", 00:16:28.970 "traddr": "10.0.0.3", 00:16:28.970 "adrfam": "ipv4", 00:16:28.970 "trsvcid": "4420", 00:16:28.970 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:28.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.970 "prchk_reftag": false, 00:16:28.970 "prchk_guard": false, 00:16:28.970 "hdgst": false, 00:16:28.970 "ddgst": false, 00:16:28.970 "psk": "key0", 00:16:28.970 "allow_unrecognized_csi": false, 00:16:28.970 "method": "bdev_nvme_attach_controller", 00:16:28.970 "req_id": 1 00:16:28.970 } 00:16:28.970 Got JSON-RPC error response 00:16:28.970 response: 00:16:28.970 { 00:16:28.970 "code": -5, 00:16:28.970 "message": "Input/output error" 00:16:28.970 } 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76199 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76199 ']' 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76199 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76199 00:16:28.970 killing process with pid 76199 00:16:28.970 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.970 00:16:28.970 Latency(us) 00:16:28.970 [2024-12-16T05:34:09.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.970 [2024-12-16T05:34:09.229Z] =================================================================================================================== 00:16:28.970 [2024-12-16T05:34:09.229Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76199' 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76199 00:16:28.970 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76199 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:29.907 05:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76234 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76234 /var/tmp/bdevperf.sock 00:16:29.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76234 ']' 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.907 05:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.907 [2024-12-16 05:34:10.051532] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:29.907 [2024-12-16 05:34:10.052007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76234 ] 00:16:30.167 [2024-12-16 05:34:10.232236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.167 [2024-12-16 05:34:10.321599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.425 [2024-12-16 05:34:10.484246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:30.993 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.993 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:30.993 05:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:30.993 [2024-12-16 05:34:11.186374] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:30.993 [2024-12-16 05:34:11.186426] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:30.993 request: 00:16:30.993 { 00:16:30.993 "name": "key0", 00:16:30.993 "path": "", 00:16:30.993 "method": "keyring_file_add_key", 00:16:30.993 "req_id": 1 00:16:30.993 } 00:16:30.993 Got JSON-RPC error response 00:16:30.993 response: 00:16:30.993 { 00:16:30.993 "code": -1, 00:16:30.993 "message": "Operation not permitted" 00:16:30.993 } 00:16:30.993 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:31.252 [2024-12-16 05:34:11.466616] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:31.252 [2024-12-16 05:34:11.466703] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:31.252 request: 00:16:31.252 { 00:16:31.252 "name": "TLSTEST", 00:16:31.252 "trtype": "tcp", 00:16:31.252 "traddr": "10.0.0.3", 00:16:31.252 "adrfam": "ipv4", 00:16:31.252 "trsvcid": "4420", 00:16:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:31.252 "prchk_reftag": false, 00:16:31.252 "prchk_guard": false, 00:16:31.252 "hdgst": false, 00:16:31.252 "ddgst": false, 00:16:31.252 "psk": "key0", 00:16:31.252 "allow_unrecognized_csi": false, 00:16:31.252 "method": "bdev_nvme_attach_controller", 00:16:31.252 "req_id": 1 00:16:31.252 } 00:16:31.252 Got JSON-RPC error response 00:16:31.252 response: 00:16:31.252 { 00:16:31.252 "code": -126, 00:16:31.252 "message": "Required key not available" 00:16:31.252 } 00:16:31.252 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76234 00:16:31.252 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76234 ']' 00:16:31.252 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76234 00:16:31.252 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:31.252 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.252 05:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76234 00:16:31.511 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:31.511 killing process with pid 76234 00:16:31.511 Received shutdown signal, test time was about 10.000000 seconds 00:16:31.511 00:16:31.511 Latency(us) 00:16:31.511 [2024-12-16T05:34:11.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.511 [2024-12-16T05:34:11.770Z] =================================================================================================================== 00:16:31.511 [2024-12-16T05:34:11.770Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:31.511 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:31.511 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76234' 00:16:31.511 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76234 00:16:31.511 05:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76234 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 75723 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75723 ']' 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75723 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75723 00:16:32.448 killing process with pid 75723 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75723' 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75723 00:16:32.448 05:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75723 00:16:33.415 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:33.415 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:33.415 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:33.415 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:16:33.415 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:33.415 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:33.415 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.LAIyw6BYMS 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.LAIyw6BYMS 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76297 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76297 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76297 ']' 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.674 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.675 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.675 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.675 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.675 [2024-12-16 05:34:13.826838] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:33.675 [2024-12-16 05:34:13.827245] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.933 [2024-12-16 05:34:14.009445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.933 [2024-12-16 05:34:14.112016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.934 [2024-12-16 05:34:14.112231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
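The key_long value generated above is a PSK in the NVMe TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here), and a base64 blob, terminated by a colon. The inline python helper in nvmf/common.sh appears to build that blob from the configured key bytes followed by their little-endian CRC32; a stand-alone sketch under that assumption (the function name and one-liner below are a reconstruction for illustration, not the harness's exact code):

    format_interchange_psk_sketch() {
        # $1 = configured key (ASCII string, as in the log), $2 = hash id (1 or 2)
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$1" "$2"
    }
    # format_interchange_psk_sketch 00112233445566778899aabbccddeeff0011223344556677 2
    # should print the NVMeTLSkey-1:02:MDAx...wWXNJw==: value shown above, if the assumption holds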
00:16:33.934 [2024-12-16 05:34:14.112386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.934 [2024-12-16 05:34:14.112655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.934 [2024-12-16 05:34:14.112709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.934 [2024-12-16 05:34:14.113985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.192 [2024-12-16 05:34:14.300369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:34.760 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.760 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:34.760 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.760 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.760 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.760 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.760 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.LAIyw6BYMS 00:16:34.760 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LAIyw6BYMS 00:16:34.760 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:35.018 [2024-12-16 05:34:15.135354] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.018 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:35.277 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:35.844 [2024-12-16 05:34:15.803601] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:35.844 [2024-12-16 05:34:15.803950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:35.844 05:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:36.103 malloc0 00:16:36.103 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:36.362 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS 00:16:36.620 05:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LAIyw6BYMS 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LAIyw6BYMS 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76358 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76358 /var/tmp/bdevperf.sock 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76358 ']' 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:36.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.879 05:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.138 [2024-12-16 05:34:17.217009] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:37.138 [2024-12-16 05:34:17.217376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76358 ] 00:16:37.396 [2024-12-16 05:34:17.399124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.396 [2024-12-16 05:34:17.525032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.655 [2024-12-16 05:34:17.740956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:38.231 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.231 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:38.231 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS 00:16:38.489 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:38.747 [2024-12-16 05:34:18.808658] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:38.747 TLSTESTn1 00:16:38.747 05:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:39.005 Running I/O for 10 seconds... 00:16:40.876 2816.00 IOPS, 11.00 MiB/s [2024-12-16T05:34:22.072Z] 2817.00 IOPS, 11.00 MiB/s [2024-12-16T05:34:23.448Z] 2880.33 IOPS, 11.25 MiB/s [2024-12-16T05:34:24.385Z] 2919.25 IOPS, 11.40 MiB/s [2024-12-16T05:34:25.320Z] 2946.80 IOPS, 11.51 MiB/s [2024-12-16T05:34:26.256Z] 2971.50 IOPS, 11.61 MiB/s [2024-12-16T05:34:27.194Z] 2987.00 IOPS, 11.67 MiB/s [2024-12-16T05:34:28.130Z] 3001.38 IOPS, 11.72 MiB/s [2024-12-16T05:34:29.066Z] 3004.89 IOPS, 11.74 MiB/s [2024-12-16T05:34:29.066Z] 3013.70 IOPS, 11.77 MiB/s 00:16:48.807 Latency(us) 00:16:48.807 [2024-12-16T05:34:29.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.807 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:48.807 Verification LBA range: start 0x0 length 0x2000 00:16:48.807 TLSTESTn1 : 10.02 3020.04 11.80 0.00 0.00 42303.74 7357.91 30742.34 00:16:48.807 [2024-12-16T05:34:29.066Z] =================================================================================================================== 00:16:48.807 [2024-12-16T05:34:29.066Z] Total : 3020.04 11.80 0.00 0.00 42303.74 7357.91 30742.34 00:16:48.807 { 00:16:48.807 "results": [ 00:16:48.807 { 00:16:48.807 "job": "TLSTESTn1", 00:16:48.807 "core_mask": "0x4", 00:16:48.807 "workload": "verify", 00:16:48.807 "status": "finished", 00:16:48.807 "verify_range": { 00:16:48.807 "start": 0, 00:16:48.807 "length": 8192 00:16:48.807 }, 00:16:48.807 "queue_depth": 128, 00:16:48.807 "io_size": 4096, 00:16:48.807 "runtime": 10.020733, 00:16:48.807 "iops": 3020.0385540658554, 00:16:48.807 "mibps": 11.797025601819747, 00:16:48.807 "io_failed": 0, 00:16:48.807 "io_timeout": 0, 00:16:48.807 "avg_latency_us": 42303.744125349585, 00:16:48.807 "min_latency_us": 7357.905454545455, 00:16:48.807 
"max_latency_us": 30742.34181818182 00:16:48.807 } 00:16:48.807 ], 00:16:48.807 "core_count": 1 00:16:48.807 } 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 76358 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76358 ']' 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76358 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76358 00:16:49.067 killing process with pid 76358 00:16:49.067 Received shutdown signal, test time was about 10.000000 seconds 00:16:49.067 00:16:49.067 Latency(us) 00:16:49.067 [2024-12-16T05:34:29.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.067 [2024-12-16T05:34:29.326Z] =================================================================================================================== 00:16:49.067 [2024-12-16T05:34:29.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76358' 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76358 00:16:49.067 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76358 00:16:50.003 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.LAIyw6BYMS 00:16:50.003 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LAIyw6BYMS 00:16:50.003 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:50.003 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LAIyw6BYMS 00:16:50.003 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:50.003 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.003 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:50.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:50.003 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.003 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LAIyw6BYMS 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LAIyw6BYMS 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76506 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76506 /var/tmp/bdevperf.sock 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76506 ']' 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.004 05:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.004 [2024-12-16 05:34:30.070885] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:50.004 [2024-12-16 05:34:30.071339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76506 ] 00:16:50.004 [2024-12-16 05:34:30.250617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.263 [2024-12-16 05:34:30.384831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.522 [2024-12-16 05:34:30.540384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:50.780 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.780 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:50.780 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS 00:16:51.039 [2024-12-16 05:34:31.205636] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LAIyw6BYMS': 0100666 00:16:51.039 [2024-12-16 05:34:31.206054] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:51.039 request: 00:16:51.039 { 00:16:51.039 "name": "key0", 00:16:51.039 "path": "/tmp/tmp.LAIyw6BYMS", 00:16:51.039 "method": "keyring_file_add_key", 00:16:51.039 "req_id": 1 00:16:51.039 } 00:16:51.039 Got JSON-RPC error response 00:16:51.039 response: 00:16:51.039 { 00:16:51.039 "code": -1, 00:16:51.039 "message": "Operation not permitted" 00:16:51.039 } 00:16:51.039 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:51.298 [2024-12-16 05:34:31.514042] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.298 [2024-12-16 05:34:31.514407] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:51.298 request: 00:16:51.298 { 00:16:51.298 "name": "TLSTEST", 00:16:51.298 "trtype": "tcp", 00:16:51.298 "traddr": "10.0.0.3", 00:16:51.298 "adrfam": "ipv4", 00:16:51.298 "trsvcid": "4420", 00:16:51.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.298 "prchk_reftag": false, 00:16:51.298 "prchk_guard": false, 00:16:51.298 "hdgst": false, 00:16:51.298 "ddgst": false, 00:16:51.298 "psk": "key0", 00:16:51.298 "allow_unrecognized_csi": false, 00:16:51.298 "method": "bdev_nvme_attach_controller", 00:16:51.298 "req_id": 1 00:16:51.298 } 00:16:51.298 Got JSON-RPC error response 00:16:51.298 response: 00:16:51.298 { 00:16:51.298 "code": -126, 00:16:51.298 "message": "Required key not available" 00:16:51.298 } 00:16:51.298 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 76506 00:16:51.298 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76506 ']' 00:16:51.298 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76506 00:16:51.298 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:51.298 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.298 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76506 00:16:51.557 killing process with pid 76506 00:16:51.557 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.557 00:16:51.557 Latency(us) 00:16:51.557 [2024-12-16T05:34:31.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.557 [2024-12-16T05:34:31.816Z] =================================================================================================================== 00:16:51.557 [2024-12-16T05:34:31.816Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:51.557 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:51.557 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:51.557 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76506' 00:16:51.557 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76506 00:16:51.557 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76506 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 76297 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76297 ']' 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76297 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76297 00:16:52.493 killing process with pid 76297 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76297' 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76297 00:16:52.493 05:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76297 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76558 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76558 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76558 ']' 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.429 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.688 [2024-12-16 05:34:33.773815] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:53.688 [2024-12-16 05:34:33.773933] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.947 [2024-12-16 05:34:33.956208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.947 [2024-12-16 05:34:34.078895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.947 [2024-12-16 05:34:34.078951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.947 [2024-12-16 05:34:34.078984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.947 [2024-12-16 05:34:34.079005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.947 [2024-12-16 05:34:34.079018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:53.947 [2024-12-16 05:34:34.080254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.206 [2024-12-16 05:34:34.264054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.LAIyw6BYMS 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LAIyw6BYMS 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.LAIyw6BYMS 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LAIyw6BYMS 00:16:54.773 05:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:55.031 [2024-12-16 05:34:35.077225] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.031 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:55.289 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:55.548 [2024-12-16 05:34:35.561443] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:55.548 [2024-12-16 05:34:35.562150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:55.548 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:55.807 malloc0 00:16:55.807 05:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:56.069 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS 00:16:56.328 
[2024-12-16 05:34:36.329744] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LAIyw6BYMS': 0100666 00:16:56.328 [2024-12-16 05:34:36.329824] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:56.328 request: 00:16:56.328 { 00:16:56.328 "name": "key0", 00:16:56.328 "path": "/tmp/tmp.LAIyw6BYMS", 00:16:56.328 "method": "keyring_file_add_key", 00:16:56.328 "req_id": 1 00:16:56.328 } 00:16:56.328 Got JSON-RPC error response 00:16:56.328 response: 00:16:56.328 { 00:16:56.328 "code": -1, 00:16:56.328 "message": "Operation not permitted" 00:16:56.328 } 00:16:56.328 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:56.587 [2024-12-16 05:34:36.601905] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:56.587 [2024-12-16 05:34:36.601992] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:56.587 request: 00:16:56.587 { 00:16:56.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.587 "host": "nqn.2016-06.io.spdk:host1", 00:16:56.587 "psk": "key0", 00:16:56.587 "method": "nvmf_subsystem_add_host", 00:16:56.587 "req_id": 1 00:16:56.587 } 00:16:56.587 Got JSON-RPC error response 00:16:56.587 response: 00:16:56.587 { 00:16:56.587 "code": -32603, 00:16:56.587 "message": "Internal error" 00:16:56.587 } 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 76558 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76558 ']' 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76558 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76558 00:16:56.587 killing process with pid 76558 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76558' 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76558 00:16:56.587 05:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76558 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.LAIyw6BYMS 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76634 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76634 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76634 ']' 00:16:57.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.524 05:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.524 [2024-12-16 05:34:37.748181] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:57.524 [2024-12-16 05:34:37.748575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.783 [2024-12-16 05:34:37.921986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.783 [2024-12-16 05:34:38.011482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.783 [2024-12-16 05:34:38.011550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.783 [2024-12-16 05:34:38.011584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.783 [2024-12-16 05:34:38.011662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.783 [2024-12-16 05:34:38.011679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
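Both earlier failures trace back to the key file's mode rather than to TLS itself: keyring_file_add_key refused '/tmp/tmp.LAIyw6BYMS' because mode 0666 leaves the PSK readable by other users, so every later RPC that referenced key0 failed ("Required key not available", "Key 'key0' does not exist"). The run only proceeds after target/tls.sh tightens the mode with the chmod 0600 seen above. A condensed sketch of that fix, reusing the file and socket names from the trace (the key file itself is generated earlier in the test and its contents are not shown in this part of the log):

    # the keyring only accepts PSK files that are private to the owner
    chmod 0600 /tmp/tmp.LAIyw6BYMS

    # register the key with the nvmf target (default RPC socket /var/tmp/spdk.sock)...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS
    # ...and with the bdevperf initiator on its own socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS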
00:16:57.783 [2024-12-16 05:34:38.012891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.042 [2024-12-16 05:34:38.189157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:58.609 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.609 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:58.609 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.609 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.609 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.609 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.609 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.LAIyw6BYMS 00:16:58.609 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LAIyw6BYMS 00:16:58.609 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:58.868 [2024-12-16 05:34:38.959256] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.868 05:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:59.193 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:59.452 [2024-12-16 05:34:39.443417] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:59.452 [2024-12-16 05:34:39.443854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:59.452 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:59.712 malloc0 00:16:59.712 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:59.971 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS 00:17:00.230 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=76695 00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 76695 /var/tmp/bdevperf.sock 00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76695 ']' 
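setup_nvmf_tgt has now succeeded, and the sequence it ran is visible verbatim in the trace above; condensed, the target-side TLS setup is the following RPC chain (NQNs, addresses and the key path are copied from the log; this is a sketch of the traced commands, not the script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k requests a secure channel, i.e. the 10.0.0.3:4420 listener requires TLS
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS
    # only host1 may connect, and only when it presents the PSK registered as key0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0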
00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.489 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.489 [2024-12-16 05:34:40.652133] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:00.489 [2024-12-16 05:34:40.652572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76695 ] 00:17:00.749 [2024-12-16 05:34:40.827397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.749 [2024-12-16 05:34:40.952499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.008 [2024-12-16 05:34:41.119472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:01.575 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.575 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:01.575 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS 00:17:01.575 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:01.834 [2024-12-16 05:34:42.015400] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:02.094 TLSTESTn1 00:17:02.094 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:02.353 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:02.353 "subsystems": [ 00:17:02.353 { 00:17:02.353 "subsystem": "keyring", 00:17:02.353 "config": [ 00:17:02.353 { 00:17:02.353 "method": "keyring_file_add_key", 00:17:02.353 "params": { 00:17:02.353 "name": "key0", 00:17:02.353 "path": "/tmp/tmp.LAIyw6BYMS" 00:17:02.353 } 00:17:02.353 } 00:17:02.353 ] 00:17:02.353 }, 00:17:02.353 { 00:17:02.353 "subsystem": "iobuf", 00:17:02.353 "config": [ 00:17:02.353 { 00:17:02.353 "method": "iobuf_set_options", 00:17:02.353 "params": { 00:17:02.353 "small_pool_count": 8192, 00:17:02.353 "large_pool_count": 1024, 00:17:02.353 "small_bufsize": 8192, 00:17:02.353 "large_bufsize": 135168, 00:17:02.353 "enable_numa": false 00:17:02.353 } 00:17:02.353 } 00:17:02.353 ] 00:17:02.353 }, 00:17:02.353 { 00:17:02.353 "subsystem": "sock", 00:17:02.353 "config": [ 00:17:02.353 { 00:17:02.353 "method": "sock_set_default_impl", 00:17:02.353 "params": { 
00:17:02.353 "impl_name": "uring" 00:17:02.353 } 00:17:02.353 }, 00:17:02.353 { 00:17:02.353 "method": "sock_impl_set_options", 00:17:02.353 "params": { 00:17:02.353 "impl_name": "ssl", 00:17:02.353 "recv_buf_size": 4096, 00:17:02.353 "send_buf_size": 4096, 00:17:02.353 "enable_recv_pipe": true, 00:17:02.353 "enable_quickack": false, 00:17:02.353 "enable_placement_id": 0, 00:17:02.353 "enable_zerocopy_send_server": true, 00:17:02.353 "enable_zerocopy_send_client": false, 00:17:02.353 "zerocopy_threshold": 0, 00:17:02.353 "tls_version": 0, 00:17:02.353 "enable_ktls": false 00:17:02.353 } 00:17:02.353 }, 00:17:02.353 { 00:17:02.353 "method": "sock_impl_set_options", 00:17:02.353 "params": { 00:17:02.353 "impl_name": "posix", 00:17:02.353 "recv_buf_size": 2097152, 00:17:02.353 "send_buf_size": 2097152, 00:17:02.353 "enable_recv_pipe": true, 00:17:02.353 "enable_quickack": false, 00:17:02.353 "enable_placement_id": 0, 00:17:02.353 "enable_zerocopy_send_server": true, 00:17:02.353 "enable_zerocopy_send_client": false, 00:17:02.353 "zerocopy_threshold": 0, 00:17:02.353 "tls_version": 0, 00:17:02.353 "enable_ktls": false 00:17:02.353 } 00:17:02.353 }, 00:17:02.353 { 00:17:02.353 "method": "sock_impl_set_options", 00:17:02.353 "params": { 00:17:02.353 "impl_name": "uring", 00:17:02.353 "recv_buf_size": 2097152, 00:17:02.353 "send_buf_size": 2097152, 00:17:02.353 "enable_recv_pipe": true, 00:17:02.353 "enable_quickack": false, 00:17:02.353 "enable_placement_id": 0, 00:17:02.353 "enable_zerocopy_send_server": false, 00:17:02.353 "enable_zerocopy_send_client": false, 00:17:02.353 "zerocopy_threshold": 0, 00:17:02.353 "tls_version": 0, 00:17:02.353 "enable_ktls": false 00:17:02.353 } 00:17:02.353 } 00:17:02.353 ] 00:17:02.353 }, 00:17:02.353 { 00:17:02.353 "subsystem": "vmd", 00:17:02.353 "config": [] 00:17:02.353 }, 00:17:02.353 { 00:17:02.353 "subsystem": "accel", 00:17:02.353 "config": [ 00:17:02.353 { 00:17:02.353 "method": "accel_set_options", 00:17:02.353 "params": { 00:17:02.353 "small_cache_size": 128, 00:17:02.353 "large_cache_size": 16, 00:17:02.353 "task_count": 2048, 00:17:02.353 "sequence_count": 2048, 00:17:02.353 "buf_count": 2048 00:17:02.353 } 00:17:02.353 } 00:17:02.353 ] 00:17:02.353 }, 00:17:02.353 { 00:17:02.353 "subsystem": "bdev", 00:17:02.354 "config": [ 00:17:02.354 { 00:17:02.354 "method": "bdev_set_options", 00:17:02.354 "params": { 00:17:02.354 "bdev_io_pool_size": 65535, 00:17:02.354 "bdev_io_cache_size": 256, 00:17:02.354 "bdev_auto_examine": true, 00:17:02.354 "iobuf_small_cache_size": 128, 00:17:02.354 "iobuf_large_cache_size": 16 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "bdev_raid_set_options", 00:17:02.354 "params": { 00:17:02.354 "process_window_size_kb": 1024, 00:17:02.354 "process_max_bandwidth_mb_sec": 0 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "bdev_iscsi_set_options", 00:17:02.354 "params": { 00:17:02.354 "timeout_sec": 30 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "bdev_nvme_set_options", 00:17:02.354 "params": { 00:17:02.354 "action_on_timeout": "none", 00:17:02.354 "timeout_us": 0, 00:17:02.354 "timeout_admin_us": 0, 00:17:02.354 "keep_alive_timeout_ms": 10000, 00:17:02.354 "arbitration_burst": 0, 00:17:02.354 "low_priority_weight": 0, 00:17:02.354 "medium_priority_weight": 0, 00:17:02.354 "high_priority_weight": 0, 00:17:02.354 "nvme_adminq_poll_period_us": 10000, 00:17:02.354 "nvme_ioq_poll_period_us": 0, 00:17:02.354 "io_queue_requests": 0, 00:17:02.354 "delay_cmd_submit": 
true, 00:17:02.354 "transport_retry_count": 4, 00:17:02.354 "bdev_retry_count": 3, 00:17:02.354 "transport_ack_timeout": 0, 00:17:02.354 "ctrlr_loss_timeout_sec": 0, 00:17:02.354 "reconnect_delay_sec": 0, 00:17:02.354 "fast_io_fail_timeout_sec": 0, 00:17:02.354 "disable_auto_failback": false, 00:17:02.354 "generate_uuids": false, 00:17:02.354 "transport_tos": 0, 00:17:02.354 "nvme_error_stat": false, 00:17:02.354 "rdma_srq_size": 0, 00:17:02.354 "io_path_stat": false, 00:17:02.354 "allow_accel_sequence": false, 00:17:02.354 "rdma_max_cq_size": 0, 00:17:02.354 "rdma_cm_event_timeout_ms": 0, 00:17:02.354 "dhchap_digests": [ 00:17:02.354 "sha256", 00:17:02.354 "sha384", 00:17:02.354 "sha512" 00:17:02.354 ], 00:17:02.354 "dhchap_dhgroups": [ 00:17:02.354 "null", 00:17:02.354 "ffdhe2048", 00:17:02.354 "ffdhe3072", 00:17:02.354 "ffdhe4096", 00:17:02.354 "ffdhe6144", 00:17:02.354 "ffdhe8192" 00:17:02.354 ], 00:17:02.354 "rdma_umr_per_io": false 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "bdev_nvme_set_hotplug", 00:17:02.354 "params": { 00:17:02.354 "period_us": 100000, 00:17:02.354 "enable": false 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "bdev_malloc_create", 00:17:02.354 "params": { 00:17:02.354 "name": "malloc0", 00:17:02.354 "num_blocks": 8192, 00:17:02.354 "block_size": 4096, 00:17:02.354 "physical_block_size": 4096, 00:17:02.354 "uuid": "5f913df8-692a-4d3c-b4a5-d27433f32d37", 00:17:02.354 "optimal_io_boundary": 0, 00:17:02.354 "md_size": 0, 00:17:02.354 "dif_type": 0, 00:17:02.354 "dif_is_head_of_md": false, 00:17:02.354 "dif_pi_format": 0 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "bdev_wait_for_examine" 00:17:02.354 } 00:17:02.354 ] 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "subsystem": "nbd", 00:17:02.354 "config": [] 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "subsystem": "scheduler", 00:17:02.354 "config": [ 00:17:02.354 { 00:17:02.354 "method": "framework_set_scheduler", 00:17:02.354 "params": { 00:17:02.354 "name": "static" 00:17:02.354 } 00:17:02.354 } 00:17:02.354 ] 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "subsystem": "nvmf", 00:17:02.354 "config": [ 00:17:02.354 { 00:17:02.354 "method": "nvmf_set_config", 00:17:02.354 "params": { 00:17:02.354 "discovery_filter": "match_any", 00:17:02.354 "admin_cmd_passthru": { 00:17:02.354 "identify_ctrlr": false 00:17:02.354 }, 00:17:02.354 "dhchap_digests": [ 00:17:02.354 "sha256", 00:17:02.354 "sha384", 00:17:02.354 "sha512" 00:17:02.354 ], 00:17:02.354 "dhchap_dhgroups": [ 00:17:02.354 "null", 00:17:02.354 "ffdhe2048", 00:17:02.354 "ffdhe3072", 00:17:02.354 "ffdhe4096", 00:17:02.354 "ffdhe6144", 00:17:02.354 "ffdhe8192" 00:17:02.354 ] 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "nvmf_set_max_subsystems", 00:17:02.354 "params": { 00:17:02.354 "max_subsystems": 1024 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "nvmf_set_crdt", 00:17:02.354 "params": { 00:17:02.354 "crdt1": 0, 00:17:02.354 "crdt2": 0, 00:17:02.354 "crdt3": 0 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "nvmf_create_transport", 00:17:02.354 "params": { 00:17:02.354 "trtype": "TCP", 00:17:02.354 "max_queue_depth": 128, 00:17:02.354 "max_io_qpairs_per_ctrlr": 127, 00:17:02.354 "in_capsule_data_size": 4096, 00:17:02.354 "max_io_size": 131072, 00:17:02.354 "io_unit_size": 131072, 00:17:02.354 "max_aq_depth": 128, 00:17:02.354 "num_shared_buffers": 511, 00:17:02.354 "buf_cache_size": 4294967295, 00:17:02.354 
"dif_insert_or_strip": false, 00:17:02.354 "zcopy": false, 00:17:02.354 "c2h_success": false, 00:17:02.354 "sock_priority": 0, 00:17:02.354 "abort_timeout_sec": 1, 00:17:02.354 "ack_timeout": 0, 00:17:02.354 "data_wr_pool_size": 0 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "nvmf_create_subsystem", 00:17:02.354 "params": { 00:17:02.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.354 "allow_any_host": false, 00:17:02.354 "serial_number": "SPDK00000000000001", 00:17:02.354 "model_number": "SPDK bdev Controller", 00:17:02.354 "max_namespaces": 10, 00:17:02.354 "min_cntlid": 1, 00:17:02.354 "max_cntlid": 65519, 00:17:02.354 "ana_reporting": false 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "nvmf_subsystem_add_host", 00:17:02.354 "params": { 00:17:02.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.354 "host": "nqn.2016-06.io.spdk:host1", 00:17:02.354 "psk": "key0" 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "nvmf_subsystem_add_ns", 00:17:02.354 "params": { 00:17:02.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.354 "namespace": { 00:17:02.354 "nsid": 1, 00:17:02.354 "bdev_name": "malloc0", 00:17:02.354 "nguid": "5F913DF8692A4D3CB4A5D27433F32D37", 00:17:02.354 "uuid": "5f913df8-692a-4d3c-b4a5-d27433f32d37", 00:17:02.354 "no_auto_visible": false 00:17:02.354 } 00:17:02.354 } 00:17:02.354 }, 00:17:02.354 { 00:17:02.354 "method": "nvmf_subsystem_add_listener", 00:17:02.354 "params": { 00:17:02.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.354 "listen_address": { 00:17:02.354 "trtype": "TCP", 00:17:02.354 "adrfam": "IPv4", 00:17:02.354 "traddr": "10.0.0.3", 00:17:02.354 "trsvcid": "4420" 00:17:02.354 }, 00:17:02.354 "secure_channel": true 00:17:02.354 } 00:17:02.354 } 00:17:02.354 ] 00:17:02.354 } 00:17:02.354 ] 00:17:02.354 }' 00:17:02.354 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:02.614 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:02.614 "subsystems": [ 00:17:02.614 { 00:17:02.614 "subsystem": "keyring", 00:17:02.614 "config": [ 00:17:02.614 { 00:17:02.614 "method": "keyring_file_add_key", 00:17:02.614 "params": { 00:17:02.614 "name": "key0", 00:17:02.614 "path": "/tmp/tmp.LAIyw6BYMS" 00:17:02.614 } 00:17:02.614 } 00:17:02.614 ] 00:17:02.614 }, 00:17:02.614 { 00:17:02.614 "subsystem": "iobuf", 00:17:02.614 "config": [ 00:17:02.614 { 00:17:02.614 "method": "iobuf_set_options", 00:17:02.614 "params": { 00:17:02.614 "small_pool_count": 8192, 00:17:02.614 "large_pool_count": 1024, 00:17:02.614 "small_bufsize": 8192, 00:17:02.614 "large_bufsize": 135168, 00:17:02.614 "enable_numa": false 00:17:02.614 } 00:17:02.614 } 00:17:02.614 ] 00:17:02.614 }, 00:17:02.614 { 00:17:02.614 "subsystem": "sock", 00:17:02.614 "config": [ 00:17:02.614 { 00:17:02.614 "method": "sock_set_default_impl", 00:17:02.614 "params": { 00:17:02.614 "impl_name": "uring" 00:17:02.614 } 00:17:02.614 }, 00:17:02.614 { 00:17:02.614 "method": "sock_impl_set_options", 00:17:02.614 "params": { 00:17:02.614 "impl_name": "ssl", 00:17:02.615 "recv_buf_size": 4096, 00:17:02.615 "send_buf_size": 4096, 00:17:02.615 "enable_recv_pipe": true, 00:17:02.615 "enable_quickack": false, 00:17:02.615 "enable_placement_id": 0, 00:17:02.615 "enable_zerocopy_send_server": true, 00:17:02.615 "enable_zerocopy_send_client": false, 00:17:02.615 "zerocopy_threshold": 0, 00:17:02.615 "tls_version": 0, 00:17:02.615 
"enable_ktls": false 00:17:02.615 } 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "method": "sock_impl_set_options", 00:17:02.615 "params": { 00:17:02.615 "impl_name": "posix", 00:17:02.615 "recv_buf_size": 2097152, 00:17:02.615 "send_buf_size": 2097152, 00:17:02.615 "enable_recv_pipe": true, 00:17:02.615 "enable_quickack": false, 00:17:02.615 "enable_placement_id": 0, 00:17:02.615 "enable_zerocopy_send_server": true, 00:17:02.615 "enable_zerocopy_send_client": false, 00:17:02.615 "zerocopy_threshold": 0, 00:17:02.615 "tls_version": 0, 00:17:02.615 "enable_ktls": false 00:17:02.615 } 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "method": "sock_impl_set_options", 00:17:02.615 "params": { 00:17:02.615 "impl_name": "uring", 00:17:02.615 "recv_buf_size": 2097152, 00:17:02.615 "send_buf_size": 2097152, 00:17:02.615 "enable_recv_pipe": true, 00:17:02.615 "enable_quickack": false, 00:17:02.615 "enable_placement_id": 0, 00:17:02.615 "enable_zerocopy_send_server": false, 00:17:02.615 "enable_zerocopy_send_client": false, 00:17:02.615 "zerocopy_threshold": 0, 00:17:02.615 "tls_version": 0, 00:17:02.615 "enable_ktls": false 00:17:02.615 } 00:17:02.615 } 00:17:02.615 ] 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "subsystem": "vmd", 00:17:02.615 "config": [] 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "subsystem": "accel", 00:17:02.615 "config": [ 00:17:02.615 { 00:17:02.615 "method": "accel_set_options", 00:17:02.615 "params": { 00:17:02.615 "small_cache_size": 128, 00:17:02.615 "large_cache_size": 16, 00:17:02.615 "task_count": 2048, 00:17:02.615 "sequence_count": 2048, 00:17:02.615 "buf_count": 2048 00:17:02.615 } 00:17:02.615 } 00:17:02.615 ] 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "subsystem": "bdev", 00:17:02.615 "config": [ 00:17:02.615 { 00:17:02.615 "method": "bdev_set_options", 00:17:02.615 "params": { 00:17:02.615 "bdev_io_pool_size": 65535, 00:17:02.615 "bdev_io_cache_size": 256, 00:17:02.615 "bdev_auto_examine": true, 00:17:02.615 "iobuf_small_cache_size": 128, 00:17:02.615 "iobuf_large_cache_size": 16 00:17:02.615 } 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "method": "bdev_raid_set_options", 00:17:02.615 "params": { 00:17:02.615 "process_window_size_kb": 1024, 00:17:02.615 "process_max_bandwidth_mb_sec": 0 00:17:02.615 } 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "method": "bdev_iscsi_set_options", 00:17:02.615 "params": { 00:17:02.615 "timeout_sec": 30 00:17:02.615 } 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "method": "bdev_nvme_set_options", 00:17:02.615 "params": { 00:17:02.615 "action_on_timeout": "none", 00:17:02.615 "timeout_us": 0, 00:17:02.615 "timeout_admin_us": 0, 00:17:02.615 "keep_alive_timeout_ms": 10000, 00:17:02.615 "arbitration_burst": 0, 00:17:02.615 "low_priority_weight": 0, 00:17:02.615 "medium_priority_weight": 0, 00:17:02.615 "high_priority_weight": 0, 00:17:02.615 "nvme_adminq_poll_period_us": 10000, 00:17:02.615 "nvme_ioq_poll_period_us": 0, 00:17:02.615 "io_queue_requests": 512, 00:17:02.615 "delay_cmd_submit": true, 00:17:02.615 "transport_retry_count": 4, 00:17:02.615 "bdev_retry_count": 3, 00:17:02.615 "transport_ack_timeout": 0, 00:17:02.615 "ctrlr_loss_timeout_sec": 0, 00:17:02.615 "reconnect_delay_sec": 0, 00:17:02.615 "fast_io_fail_timeout_sec": 0, 00:17:02.615 "disable_auto_failback": false, 00:17:02.615 "generate_uuids": false, 00:17:02.615 "transport_tos": 0, 00:17:02.615 "nvme_error_stat": false, 00:17:02.615 "rdma_srq_size": 0, 00:17:02.615 "io_path_stat": false, 00:17:02.615 "allow_accel_sequence": false, 00:17:02.615 "rdma_max_cq_size": 0, 
00:17:02.615 "rdma_cm_event_timeout_ms": 0, 00:17:02.615 "dhchap_digests": [ 00:17:02.615 "sha256", 00:17:02.615 "sha384", 00:17:02.615 "sha512" 00:17:02.615 ], 00:17:02.615 "dhchap_dhgroups": [ 00:17:02.615 "null", 00:17:02.615 "ffdhe2048", 00:17:02.615 "ffdhe3072", 00:17:02.615 "ffdhe4096", 00:17:02.615 "ffdhe6144", 00:17:02.615 "ffdhe8192" 00:17:02.615 ], 00:17:02.615 "rdma_umr_per_io": false 00:17:02.615 } 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "method": "bdev_nvme_attach_controller", 00:17:02.615 "params": { 00:17:02.615 "name": "TLSTEST", 00:17:02.615 "trtype": "TCP", 00:17:02.615 "adrfam": "IPv4", 00:17:02.615 "traddr": "10.0.0.3", 00:17:02.615 "trsvcid": "4420", 00:17:02.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.615 "prchk_reftag": false, 00:17:02.615 "prchk_guard": false, 00:17:02.615 "ctrlr_loss_timeout_sec": 0, 00:17:02.615 "reconnect_delay_sec": 0, 00:17:02.615 "fast_io_fail_timeout_sec": 0, 00:17:02.615 "psk": "key0", 00:17:02.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.615 "hdgst": false, 00:17:02.615 "ddgst": false, 00:17:02.615 "multipath": "multipath" 00:17:02.615 } 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "method": "bdev_nvme_set_hotplug", 00:17:02.615 "params": { 00:17:02.615 "period_us": 100000, 00:17:02.615 "enable": false 00:17:02.615 } 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "method": "bdev_wait_for_examine" 00:17:02.615 } 00:17:02.615 ] 00:17:02.615 }, 00:17:02.615 { 00:17:02.615 "subsystem": "nbd", 00:17:02.615 "config": [] 00:17:02.615 } 00:17:02.615 ] 00:17:02.615 }' 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 76695 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76695 ']' 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76695 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76695 00:17:02.615 killing process with pid 76695 00:17:02.615 Received shutdown signal, test time was about 10.000000 seconds 00:17:02.615 00:17:02.615 Latency(us) 00:17:02.615 [2024-12-16T05:34:42.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.615 [2024-12-16T05:34:42.874Z] =================================================================================================================== 00:17:02.615 [2024-12-16T05:34:42.874Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76695' 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76695 00:17:02.615 05:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76695 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 76634 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76634 ']' 
00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76634 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76634 00:17:03.643 killing process with pid 76634 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76634' 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76634 00:17:03.643 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76634 00:17:04.581 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:04.581 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:04.581 "subsystems": [ 00:17:04.581 { 00:17:04.581 "subsystem": "keyring", 00:17:04.581 "config": [ 00:17:04.581 { 00:17:04.581 "method": "keyring_file_add_key", 00:17:04.581 "params": { 00:17:04.581 "name": "key0", 00:17:04.581 "path": "/tmp/tmp.LAIyw6BYMS" 00:17:04.581 } 00:17:04.581 } 00:17:04.581 ] 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "subsystem": "iobuf", 00:17:04.581 "config": [ 00:17:04.581 { 00:17:04.581 "method": "iobuf_set_options", 00:17:04.581 "params": { 00:17:04.581 "small_pool_count": 8192, 00:17:04.581 "large_pool_count": 1024, 00:17:04.581 "small_bufsize": 8192, 00:17:04.581 "large_bufsize": 135168, 00:17:04.581 "enable_numa": false 00:17:04.581 } 00:17:04.581 } 00:17:04.581 ] 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "subsystem": "sock", 00:17:04.581 "config": [ 00:17:04.581 { 00:17:04.581 "method": "sock_set_default_impl", 00:17:04.581 "params": { 00:17:04.581 "impl_name": "uring" 00:17:04.581 } 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "method": "sock_impl_set_options", 00:17:04.581 "params": { 00:17:04.581 "impl_name": "ssl", 00:17:04.581 "recv_buf_size": 4096, 00:17:04.581 "send_buf_size": 4096, 00:17:04.581 "enable_recv_pipe": true, 00:17:04.581 "enable_quickack": false, 00:17:04.581 "enable_placement_id": 0, 00:17:04.581 "enable_zerocopy_send_server": true, 00:17:04.581 "enable_zerocopy_send_client": false, 00:17:04.581 "zerocopy_threshold": 0, 00:17:04.581 "tls_version": 0, 00:17:04.581 "enable_ktls": false 00:17:04.581 } 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "method": "sock_impl_set_options", 00:17:04.581 "params": { 00:17:04.581 "impl_name": "posix", 00:17:04.581 "recv_buf_size": 2097152, 00:17:04.581 "send_buf_size": 2097152, 00:17:04.581 "enable_recv_pipe": true, 00:17:04.581 "enable_quickack": false, 00:17:04.581 "enable_placement_id": 0, 00:17:04.581 "enable_zerocopy_send_server": true, 00:17:04.581 "enable_zerocopy_send_client": false, 00:17:04.581 "zerocopy_threshold": 0, 00:17:04.581 "tls_version": 0, 00:17:04.581 "enable_ktls": false 00:17:04.581 } 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "method": "sock_impl_set_options", 00:17:04.581 "params": { 00:17:04.581 "impl_name": "uring", 00:17:04.581 "recv_buf_size": 2097152, 00:17:04.581 "send_buf_size": 
2097152, 00:17:04.581 "enable_recv_pipe": true, 00:17:04.581 "enable_quickack": false, 00:17:04.581 "enable_placement_id": 0, 00:17:04.581 "enable_zerocopy_send_server": false, 00:17:04.581 "enable_zerocopy_send_client": false, 00:17:04.581 "zerocopy_threshold": 0, 00:17:04.581 "tls_version": 0, 00:17:04.581 "enable_ktls": false 00:17:04.581 } 00:17:04.581 } 00:17:04.581 ] 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "subsystem": "vmd", 00:17:04.581 "config": [] 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "subsystem": "accel", 00:17:04.581 "config": [ 00:17:04.581 { 00:17:04.581 "method": "accel_set_options", 00:17:04.581 "params": { 00:17:04.581 "small_cache_size": 128, 00:17:04.581 "large_cache_size": 16, 00:17:04.581 "task_count": 2048, 00:17:04.581 "sequence_count": 2048, 00:17:04.581 "buf_count": 2048 00:17:04.581 } 00:17:04.581 } 00:17:04.581 ] 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "subsystem": "bdev", 00:17:04.581 "config": [ 00:17:04.581 { 00:17:04.581 "method": "bdev_set_options", 00:17:04.581 "params": { 00:17:04.581 "bdev_io_pool_size": 65535, 00:17:04.581 "bdev_io_cache_size": 256, 00:17:04.581 "bdev_auto_examine": true, 00:17:04.581 "iobuf_small_cache_size": 128, 00:17:04.581 "iobuf_large_cache_size": 16 00:17:04.581 } 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "method": "bdev_raid_set_options", 00:17:04.581 "params": { 00:17:04.581 "process_window_size_kb": 1024, 00:17:04.581 "process_max_bandwidth_mb_sec": 0 00:17:04.581 } 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "method": "bdev_iscsi_set_options", 00:17:04.581 "params": { 00:17:04.581 "timeout_sec": 30 00:17:04.581 } 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "method": "bdev_nvme_set_options", 00:17:04.581 "params": { 00:17:04.581 "action_on_timeout": "none", 00:17:04.581 "timeout_us": 0, 00:17:04.581 "timeout_admin_us": 0, 00:17:04.581 "keep_alive_timeout_ms": 10000, 00:17:04.581 "arbitration_burst": 0, 00:17:04.581 "low_priority_weight": 0, 00:17:04.581 "medium_priority_weight": 0, 00:17:04.581 "high_priority_weight": 0, 00:17:04.581 "nvme_adminq_poll_period_us": 10000, 00:17:04.581 "nvme_ioq_poll_period_us": 0, 00:17:04.581 "io_queue_requests": 0, 00:17:04.581 "delay_cmd_submit": true, 00:17:04.581 "transport_retry_count": 4, 00:17:04.581 "bdev_retry_count": 3, 00:17:04.581 "transport_ack_timeout": 0, 00:17:04.581 "ctrlr_loss_timeout_sec": 0, 00:17:04.581 "reconnect_delay_sec": 0, 00:17:04.581 "fast_io_fail_timeout_sec": 0, 00:17:04.581 "disable_auto_failback": false, 00:17:04.581 "generate_uuids": false, 00:17:04.581 "transport_tos": 0, 00:17:04.581 "nvme_error_stat": false, 00:17:04.581 "rdma_srq_size": 0, 00:17:04.581 "io_path_stat": false, 00:17:04.581 "allow_accel_sequence": false, 00:17:04.581 "rdma_max_cq_size": 0, 00:17:04.581 "rdma_cm_event_timeout_ms": 0, 00:17:04.581 "dhchap_digests": [ 00:17:04.581 "sha256", 00:17:04.581 "sha384", 00:17:04.581 "sha512" 00:17:04.581 ], 00:17:04.581 "dhchap_dhgroups": [ 00:17:04.581 "null", 00:17:04.581 "ffdhe2048", 00:17:04.581 "ffdhe3072", 00:17:04.581 "ffdhe4096", 00:17:04.581 "ffdhe6144", 00:17:04.581 "ffdhe8192" 00:17:04.581 ], 00:17:04.581 "rdma_umr_per_io": false 00:17:04.581 } 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "method": "bdev_nvme_set_hotplug", 00:17:04.581 "params": { 00:17:04.581 "period_us": 100000, 00:17:04.581 "enable": false 00:17:04.581 } 00:17:04.581 }, 00:17:04.581 { 00:17:04.581 "method": "bdev_malloc_create", 00:17:04.582 "params": { 00:17:04.582 "name": "malloc0", 00:17:04.582 "num_blocks": 8192, 00:17:04.582 "block_size": 4096, 
00:17:04.582 "physical_block_size": 4096, 00:17:04.582 "uuid": "5f913df8-692a-4d3c-b4a5-d27433f32d37", 00:17:04.582 "optimal_io_boundary": 0, 00:17:04.582 "md_size": 0, 00:17:04.582 "dif_type": 0, 00:17:04.582 "dif_is_head_of_md": false, 00:17:04.582 "dif_pi_format": 0 00:17:04.582 } 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "method": "bdev_wait_for_examine" 00:17:04.582 } 00:17:04.582 ] 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "subsystem": "nbd", 00:17:04.582 "config": [] 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "subsystem": "scheduler", 00:17:04.582 "config": [ 00:17:04.582 { 00:17:04.582 "method": "framework_set_scheduler", 00:17:04.582 "params": { 00:17:04.582 "name": "static" 00:17:04.582 } 00:17:04.582 } 00:17:04.582 ] 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "subsystem": "nvmf", 00:17:04.582 "config": [ 00:17:04.582 { 00:17:04.582 "method": "nvmf_set_config", 00:17:04.582 "params": { 00:17:04.582 "discovery_filter": "match_any", 00:17:04.582 "admin_cmd_passthru": { 00:17:04.582 "identify_ctrlr": false 00:17:04.582 }, 00:17:04.582 "dhchap_digests": [ 00:17:04.582 "sha256", 00:17:04.582 "sha384", 00:17:04.582 "sha512" 00:17:04.582 ], 00:17:04.582 "dhchap_dhgroups": [ 00:17:04.582 "null", 00:17:04.582 "ffdhe2048", 00:17:04.582 "ffdhe3072", 00:17:04.582 "ffdhe4096", 00:17:04.582 "ffdhe6144", 00:17:04.582 "ffdhe8192" 00:17:04.582 ] 00:17:04.582 } 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "method": "nvmf_set_max_subsystems", 00:17:04.582 "params": { 00:17:04.582 "max_subsystems": 1024 00:17:04.582 } 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "method": "nvmf_set_crdt", 00:17:04.582 "params": { 00:17:04.582 "crdt1": 0, 00:17:04.582 "crdt2": 0, 00:17:04.582 "crdt3": 0 00:17:04.582 } 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "method": "nvmf_create_transport", 00:17:04.582 "params": { 00:17:04.582 "trtype": "TCP", 00:17:04.582 "max_queue_depth": 128, 00:17:04.582 "max_io_qpairs_per_ctrlr": 127, 00:17:04.582 "in_capsule_data_size": 4096, 00:17:04.582 "max_io_size": 131072, 00:17:04.582 "io_unit_size": 131072, 00:17:04.582 "max_aq_depth": 128, 00:17:04.582 "num_shared_buffers": 511, 00:17:04.582 "buf_cache_size": 4294967295, 00:17:04.582 "dif_insert_or_strip": false, 00:17:04.582 "zcopy": false, 00:17:04.582 "c2h_success": false, 00:17:04.582 "sock_priority": 0, 00:17:04.582 "abort_timeout_sec": 1, 00:17:04.582 "ack_timeout": 0, 00:17:04.582 "data_wr_pool_size": 0 00:17:04.582 } 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "method": "nvmf_create_subsystem", 00:17:04.582 "params": { 00:17:04.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.582 "allow_any_host": false, 00:17:04.582 "serial_number": "SPDK00000000000001", 00:17:04.582 "model_number": "SPDK bdev Controller", 00:17:04.582 "max_namespaces": 10, 00:17:04.582 "min_cntlid": 1, 00:17:04.582 "max_cntlid": 65519, 00:17:04.582 "ana_reporting": false 00:17:04.582 } 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "method": "nvmf_subsystem_add_host", 00:17:04.582 "params": { 00:17:04.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.582 "host": "nqn.2016-06.io.spdk:host1", 00:17:04.582 "psk": "key0" 00:17:04.582 } 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "method": "nvmf_subsystem_add_ns", 00:17:04.582 "params": { 00:17:04.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.582 "namespace": { 00:17:04.582 "nsid": 1, 00:17:04.582 "bdev_name": "malloc0", 00:17:04.582 "nguid": "5F913DF8692A4D3CB4A5D27433F32D37", 00:17:04.582 "uuid": "5f913df8-692a-4d3c-b4a5-d27433f32d37", 00:17:04.582 "no_auto_visible": false 00:17:04.582 } 00:17:04.582 
} 00:17:04.582 }, 00:17:04.582 { 00:17:04.582 "method": "nvmf_subsystem_add_listener", 00:17:04.582 "params": { 00:17:04.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.582 "listen_address": { 00:17:04.582 "trtype": "TCP", 00:17:04.582 "adrfam": "IPv4", 00:17:04.582 "traddr": "10.0.0.3", 00:17:04.582 "trsvcid": "4420" 00:17:04.582 }, 00:17:04.582 "secure_channel": true 00:17:04.582 } 00:17:04.582 } 00:17:04.582 ] 00:17:04.582 } 00:17:04.582 ] 00:17:04.582 }' 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76759 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76759 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76759 ']' 00:17:04.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.582 05:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.582 [2024-12-16 05:34:44.828585] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:04.582 [2024-12-16 05:34:44.829110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.842 [2024-12-16 05:34:45.003312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.842 [2024-12-16 05:34:45.087559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.842 [2024-12-16 05:34:45.087668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.842 [2024-12-16 05:34:45.087705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.842 [2024-12-16 05:34:45.087739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.842 [2024-12-16 05:34:45.087769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
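For this run the target is not configured over RPC at all: the tgtconf JSON echoed above is handed to nvmf_tgt on /dev/fd/62, which in bash terms is process substitution, and the subsystem, listener, key and namespace are recreated at startup from that file. A sketch of the invocation, assuming $tgtconf still holds the save_config output captured earlier (variable name as in target/tls.sh@198; the netns wrapper is taken from the nvmfappstart line above):

    # replay the saved configuration instead of re-issuing RPCs
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")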
00:17:04.842 [2024-12-16 05:34:45.089271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.406 [2024-12-16 05:34:45.381075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:05.406 [2024-12-16 05:34:45.546506] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.406 [2024-12-16 05:34:45.578408] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:05.406 [2024-12-16 05:34:45.578885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=76791 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 76791 /var/tmp/bdevperf.sock 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76791 ']' 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
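The bdevperf side is restarted the same way: the JSON that follows is its saved config, fed in on /dev/fd/63, so key0 and the TLSTEST controller are created at startup rather than over RPC. Once both processes are up, the I/O itself is kicked off externally with bdevperf.py, as the trace shows a few lines further on. A condensed sketch, assuming $bdevperfconf holds the config captured above (variable name as in target/tls.sh@199):

    # bdevperf recreates key0 and the TLS controller from the saved config at startup
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &

    # drive the run: perform_tests starts the verify workload defined on the command line
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests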
00:17:05.666 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:05.666 "subsystems": [ 00:17:05.666 { 00:17:05.666 "subsystem": "keyring", 00:17:05.666 "config": [ 00:17:05.666 { 00:17:05.666 "method": "keyring_file_add_key", 00:17:05.666 "params": { 00:17:05.666 "name": "key0", 00:17:05.666 "path": "/tmp/tmp.LAIyw6BYMS" 00:17:05.666 } 00:17:05.666 } 00:17:05.666 ] 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "subsystem": "iobuf", 00:17:05.666 "config": [ 00:17:05.666 { 00:17:05.666 "method": "iobuf_set_options", 00:17:05.666 "params": { 00:17:05.666 "small_pool_count": 8192, 00:17:05.666 "large_pool_count": 1024, 00:17:05.666 "small_bufsize": 8192, 00:17:05.666 "large_bufsize": 135168, 00:17:05.666 "enable_numa": false 00:17:05.666 } 00:17:05.666 } 00:17:05.666 ] 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "subsystem": "sock", 00:17:05.666 "config": [ 00:17:05.666 { 00:17:05.666 "method": "sock_set_default_impl", 00:17:05.666 "params": { 00:17:05.666 "impl_name": "uring" 00:17:05.666 } 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "method": "sock_impl_set_options", 00:17:05.666 "params": { 00:17:05.666 "impl_name": "ssl", 00:17:05.666 "recv_buf_size": 4096, 00:17:05.666 "send_buf_size": 4096, 00:17:05.666 "enable_recv_pipe": true, 00:17:05.666 "enable_quickack": false, 00:17:05.666 "enable_placement_id": 0, 00:17:05.666 "enable_zerocopy_send_server": true, 00:17:05.666 "enable_zerocopy_send_client": false, 00:17:05.666 "zerocopy_threshold": 0, 00:17:05.666 "tls_version": 0, 00:17:05.666 "enable_ktls": false 00:17:05.666 } 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "method": "sock_impl_set_options", 00:17:05.666 "params": { 00:17:05.666 "impl_name": "posix", 00:17:05.666 "recv_buf_size": 2097152, 00:17:05.666 "send_buf_size": 2097152, 00:17:05.666 "enable_recv_pipe": true, 00:17:05.666 "enable_quickack": false, 00:17:05.666 "enable_placement_id": 0, 00:17:05.666 "enable_zerocopy_send_server": true, 00:17:05.666 "enable_zerocopy_send_client": false, 00:17:05.666 "zerocopy_threshold": 0, 00:17:05.666 "tls_version": 0, 00:17:05.666 "enable_ktls": false 00:17:05.666 } 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "method": "sock_impl_set_options", 00:17:05.666 "params": { 00:17:05.666 "impl_name": "uring", 00:17:05.666 "recv_buf_size": 2097152, 00:17:05.666 "send_buf_size": 2097152, 00:17:05.666 "enable_recv_pipe": true, 00:17:05.666 "enable_quickack": false, 00:17:05.666 "enable_placement_id": 0, 00:17:05.666 "enable_zerocopy_send_server": false, 00:17:05.666 "enable_zerocopy_send_client": false, 00:17:05.666 "zerocopy_threshold": 0, 00:17:05.666 "tls_version": 0, 00:17:05.666 "enable_ktls": false 00:17:05.666 } 00:17:05.666 } 00:17:05.666 ] 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "subsystem": "vmd", 00:17:05.666 "config": [] 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "subsystem": "accel", 00:17:05.666 "config": [ 00:17:05.666 { 00:17:05.666 "method": "accel_set_options", 00:17:05.666 "params": { 00:17:05.666 "small_cache_size": 128, 00:17:05.666 "large_cache_size": 16, 00:17:05.666 "task_count": 2048, 00:17:05.666 "sequence_count": 2048, 00:17:05.666 "buf_count": 2048 00:17:05.666 } 00:17:05.666 } 00:17:05.666 ] 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "subsystem": "bdev", 00:17:05.666 "config": [ 00:17:05.666 { 00:17:05.666 "method": "bdev_set_options", 00:17:05.666 "params": { 00:17:05.666 "bdev_io_pool_size": 65535, 00:17:05.666 "bdev_io_cache_size": 256, 00:17:05.666 "bdev_auto_examine": true, 00:17:05.666 "iobuf_small_cache_size": 128, 00:17:05.666 
"iobuf_large_cache_size": 16 00:17:05.666 } 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "method": "bdev_raid_set_options", 00:17:05.666 "params": { 00:17:05.666 "process_window_size_kb": 1024, 00:17:05.666 "process_max_bandwidth_mb_sec": 0 00:17:05.666 } 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "method": "bdev_iscsi_set_options", 00:17:05.666 "params": { 00:17:05.666 "timeout_sec": 30 00:17:05.666 } 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "method": "bdev_nvme_set_options", 00:17:05.666 "params": { 00:17:05.666 "action_on_timeout": "none", 00:17:05.666 "timeout_us": 0, 00:17:05.666 "timeout_admin_us": 0, 00:17:05.666 "keep_alive_timeout_ms": 10000, 00:17:05.666 "arbitration_burst": 0, 00:17:05.666 "low_priority_weight": 0, 00:17:05.666 "medium_priority_weight": 0, 00:17:05.666 "high_priority_weight": 0, 00:17:05.666 "nvme_adminq_poll_period_us": 10000, 00:17:05.666 "nvme_ioq_poll_period_us": 0, 00:17:05.666 "io_queue_requests": 512, 00:17:05.666 "delay_cmd_submit": true, 00:17:05.666 "transport_retry_count": 4, 00:17:05.666 "bdev_retry_count": 3, 00:17:05.666 "transport_ack_timeout": 0, 00:17:05.666 "ctrlr_loss_timeout_sec": 0, 00:17:05.666 "reconnect_delay_sec": 0, 00:17:05.666 "fast_io_fail_timeout_sec": 0, 00:17:05.666 "disable_auto_failback": false, 00:17:05.666 "generate_uuids": false, 00:17:05.666 "transport_tos": 0, 00:17:05.666 "nvme_error_stat": false, 00:17:05.666 "rdma_srq_size": 0, 00:17:05.666 "io_path_stat": false, 00:17:05.666 "allow_accel_sequence": false, 00:17:05.666 "rdma_max_cq_size": 0, 00:17:05.666 "rdma_cm_event_timeout_ms": 0, 00:17:05.666 "dhchap_digests": [ 00:17:05.666 "sha256", 00:17:05.666 "sha384", 00:17:05.666 "sha512" 00:17:05.666 ], 00:17:05.666 "dhchap_dhgroups": [ 00:17:05.666 "null", 00:17:05.666 "ffdhe2048", 00:17:05.666 "ffdhe3072", 00:17:05.666 "ffdhe4096", 00:17:05.666 "ffdhe6144", 00:17:05.666 "ffdhe8192" 00:17:05.666 ], 00:17:05.666 "rdma_umr_per_io": false 00:17:05.666 } 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "method": "bdev_nvme_attach_controller", 00:17:05.666 "params": { 00:17:05.666 "name": "TLSTEST", 00:17:05.666 "trtype": "TCP", 00:17:05.666 "adrfam": "IPv4", 00:17:05.666 "traddr": "10.0.0.3", 00:17:05.666 "trsvcid": "4420", 00:17:05.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.666 "prchk_reftag": false, 00:17:05.666 "prchk_guard": false, 00:17:05.666 "ctrlr_loss_timeout_sec": 0, 00:17:05.666 "reconnect_delay_sec": 0, 00:17:05.666 "fast_io_fail_timeout_sec": 0, 00:17:05.666 "psk": "key0", 00:17:05.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.666 "hdgst": false, 00:17:05.666 "ddgst": false, 00:17:05.666 "multipath": "multipath" 00:17:05.666 } 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "method": "bdev_nvme_set_hotplug", 00:17:05.666 "params": { 00:17:05.667 "period_us": 100000, 00:17:05.667 "enable": false 00:17:05.667 } 00:17:05.667 }, 00:17:05.667 { 00:17:05.667 "method": "bdev_wait_for_examine" 00:17:05.667 } 00:17:05.667 ] 00:17:05.667 }, 00:17:05.667 { 00:17:05.667 "subsystem": "nbd", 00:17:05.667 "config": [] 00:17:05.667 } 00:17:05.667 ] 00:17:05.667 }' 00:17:05.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.667 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.667 05:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.667 [2024-12-16 05:34:45.917423] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:05.667 [2024-12-16 05:34:45.918368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76791 ] 00:17:05.925 [2024-12-16 05:34:46.099642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.184 [2024-12-16 05:34:46.198650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.443 [2024-12-16 05:34:46.449644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:06.443 [2024-12-16 05:34:46.558348] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.702 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.702 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:06.702 05:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:06.702 Running I/O for 10 seconds... 00:17:09.016 2944.00 IOPS, 11.50 MiB/s [2024-12-16T05:34:50.212Z] 2974.50 IOPS, 11.62 MiB/s [2024-12-16T05:34:51.149Z] 2977.00 IOPS, 11.63 MiB/s [2024-12-16T05:34:52.085Z] 2966.75 IOPS, 11.59 MiB/s [2024-12-16T05:34:53.022Z] 2968.80 IOPS, 11.60 MiB/s [2024-12-16T05:34:53.958Z] 2966.67 IOPS, 11.59 MiB/s [2024-12-16T05:34:55.335Z] 2966.43 IOPS, 11.59 MiB/s [2024-12-16T05:34:56.272Z] 2965.75 IOPS, 11.58 MiB/s [2024-12-16T05:34:57.208Z] 2967.22 IOPS, 11.59 MiB/s [2024-12-16T05:34:57.208Z] 2968.90 IOPS, 11.60 MiB/s 00:17:16.949 Latency(us) 00:17:16.949 [2024-12-16T05:34:57.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.949 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:16.949 Verification LBA range: start 0x0 length 0x2000 00:17:16.949 TLSTESTn1 : 10.02 2974.61 11.62 0.00 0.00 42951.56 6851.49 43134.60 00:17:16.949 [2024-12-16T05:34:57.208Z] =================================================================================================================== 00:17:16.949 [2024-12-16T05:34:57.208Z] Total : 2974.61 11.62 0.00 0.00 42951.56 6851.49 43134.60 00:17:16.949 { 00:17:16.949 "results": [ 00:17:16.949 { 00:17:16.949 "job": "TLSTESTn1", 00:17:16.949 "core_mask": "0x4", 00:17:16.949 "workload": "verify", 00:17:16.949 "status": "finished", 00:17:16.949 "verify_range": { 00:17:16.949 "start": 0, 00:17:16.949 "length": 8192 00:17:16.949 }, 00:17:16.949 "queue_depth": 128, 00:17:16.949 "io_size": 4096, 00:17:16.949 "runtime": 10.022161, 00:17:16.949 "iops": 2974.607971274858, 00:17:16.949 "mibps": 11.619562387792413, 00:17:16.949 "io_failed": 0, 00:17:16.949 "io_timeout": 0, 00:17:16.949 "avg_latency_us": 42951.558406011005, 00:17:16.949 "min_latency_us": 6851.490909090909, 00:17:16.949 "max_latency_us": 43134.60363636364 00:17:16.949 } 00:17:16.949 ], 00:17:16.949 "core_count": 1 00:17:16.949 } 00:17:16.949 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.949 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 76791 00:17:16.949 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76791 ']' 00:17:16.949 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 76791 00:17:16.949 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:16.949 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.949 05:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76791 00:17:16.949 killing process with pid 76791 00:17:16.949 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.949 00:17:16.949 Latency(us) 00:17:16.949 [2024-12-16T05:34:57.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.949 [2024-12-16T05:34:57.208Z] =================================================================================================================== 00:17:16.949 [2024-12-16T05:34:57.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.949 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:16.949 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:16.949 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76791' 00:17:16.949 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76791 00:17:16.949 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76791 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 76759 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76759 ']' 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76759 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76759 00:17:17.886 killing process with pid 76759 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76759' 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76759 00:17:17.886 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76759 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76948 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:19.264 05:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76948 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 76948 ']' 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.264 05:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.264 [2024-12-16 05:34:59.245521] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:19.264 [2024-12-16 05:34:59.245753] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.264 [2024-12-16 05:34:59.438944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.523 [2024-12-16 05:34:59.569695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.523 [2024-12-16 05:34:59.569774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.523 [2024-12-16 05:34:59.569812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.523 [2024-12-16 05:34:59.569841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.523 [2024-12-16 05:34:59.569858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:19.523 [2024-12-16 05:34:59.571317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.523 [2024-12-16 05:34:59.761350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:20.091 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.091 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:20.091 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.091 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.091 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.091 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.091 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.LAIyw6BYMS 00:17:20.091 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LAIyw6BYMS 00:17:20.091 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:20.350 [2024-12-16 05:35:00.499056] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.350 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:20.609 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:20.870 [2024-12-16 05:35:01.047221] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:20.870 [2024-12-16 05:35:01.047667] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:20.870 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:21.164 malloc0 00:17:21.164 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:21.434 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS 00:17:21.693 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=77005 00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
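For reference, the setup_nvmf_tgt sequence traced above amounts to roughly the following RPCs against the target (a sketch with paths abbreviated; the PSK file, NQNs and the 10.0.0.3:4420 listener are the values from this run):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-capable (logged above as experimental)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0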
00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 77005 /var/tmp/bdevperf.sock 00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77005 ']' 00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.952 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.211 [2024-12-16 05:35:02.255864] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:22.211 [2024-12-16 05:35:02.256249] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77005 ] 00:17:22.211 [2024-12-16 05:35:02.434560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.470 [2024-12-16 05:35:02.559383] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.728 [2024-12-16 05:35:02.749158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:22.987 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.987 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:22.987 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS 00:17:23.246 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:23.505 [2024-12-16 05:35:03.721687] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:23.764 nvme0n1 00:17:23.764 05:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:23.764 Running I/O for 1 seconds... 
00:17:24.958 2688.00 IOPS, 10.50 MiB/s 00:17:24.958 Latency(us) 00:17:24.958 [2024-12-16T05:35:05.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.958 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:24.958 Verification LBA range: start 0x0 length 0x2000 00:17:24.958 nvme0n1 : 1.04 2708.64 10.58 0.00 0.00 46542.51 12630.57 31695.59 00:17:24.958 [2024-12-16T05:35:05.217Z] =================================================================================================================== 00:17:24.958 [2024-12-16T05:35:05.217Z] Total : 2708.64 10.58 0.00 0.00 46542.51 12630.57 31695.59 00:17:24.958 { 00:17:24.958 "results": [ 00:17:24.958 { 00:17:24.958 "job": "nvme0n1", 00:17:24.958 "core_mask": "0x2", 00:17:24.958 "workload": "verify", 00:17:24.958 "status": "finished", 00:17:24.958 "verify_range": { 00:17:24.958 "start": 0, 00:17:24.958 "length": 8192 00:17:24.958 }, 00:17:24.958 "queue_depth": 128, 00:17:24.958 "io_size": 4096, 00:17:24.958 "runtime": 1.039638, 00:17:24.958 "iops": 2708.635121070988, 00:17:24.958 "mibps": 10.580605941683547, 00:17:24.958 "io_failed": 0, 00:17:24.958 "io_timeout": 0, 00:17:24.958 "avg_latency_us": 46542.51371900827, 00:17:24.958 "min_latency_us": 12630.574545454545, 00:17:24.958 "max_latency_us": 31695.592727272728 00:17:24.958 } 00:17:24.958 ], 00:17:24.958 "core_count": 1 00:17:24.958 } 00:17:24.958 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 77005 00:17:24.958 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77005 ']' 00:17:24.959 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77005 00:17:24.959 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:24.959 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.959 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77005 00:17:24.959 killing process with pid 77005 00:17:24.959 Received shutdown signal, test time was about 1.000000 seconds 00:17:24.959 00:17:24.959 Latency(us) 00:17:24.959 [2024-12-16T05:35:05.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.959 [2024-12-16T05:35:05.218Z] =================================================================================================================== 00:17:24.959 [2024-12-16T05:35:05.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.959 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:24.959 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:24.959 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77005' 00:17:24.959 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77005 00:17:24.959 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77005 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 76948 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 76948 ']' 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 76948 00:17:25.893 05:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76948 00:17:25.893 killing process with pid 76948 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76948' 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 76948 00:17:25.893 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 76948 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=77069 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 77069 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77069 ']' 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.828 05:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.828 [2024-12-16 05:35:06.896961] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:26.828 [2024-12-16 05:35:06.897383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.086 [2024-12-16 05:35:07.087481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.086 [2024-12-16 05:35:07.209890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.086 [2024-12-16 05:35:07.209965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:27.086 [2024-12-16 05:35:07.209990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.086 [2024-12-16 05:35:07.210020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.086 [2024-12-16 05:35:07.210037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.086 [2024-12-16 05:35:07.211455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.345 [2024-12-16 05:35:07.381197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:27.603 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.603 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:27.603 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:27.603 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:27.603 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.603 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.603 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:27.603 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.603 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.603 [2024-12-16 05:35:07.798238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.603 malloc0 00:17:27.603 [2024-12-16 05:35:07.851243] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:27.603 [2024-12-16 05:35:07.851521] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=77108 00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 77108 /var/tmp/bdevperf.sock 00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77108 ']' 00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
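This pass rebuilds the same target state so that its live configuration can be captured with save_config and replayed later in the test (tls.sh@267, @268 and @273 below); a sketch of that capture-and-replay flow, under the same assumptions as the sketches above:

    # dump the running target and bdevperf configurations as JSON
    tgtcfg=$(scripts/rpc.py save_config)
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # after both processes are killed, a fresh target can be started straight from the saved JSON
    # (the harness feeds it in as /dev/fd/62 via process substitution)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")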
00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.862 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.862 [2024-12-16 05:35:07.993294] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:27.862 [2024-12-16 05:35:07.993732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77108 ] 00:17:28.120 [2024-12-16 05:35:08.179889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.120 [2024-12-16 05:35:08.302559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.378 [2024-12-16 05:35:08.456066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.945 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.945 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:28.945 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS 00:17:28.945 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:29.203 [2024-12-16 05:35:09.386498] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.461 nvme0n1 00:17:29.461 05:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:29.461 Running I/O for 1 seconds... 
00:17:30.395 3200.00 IOPS, 12.50 MiB/s 00:17:30.395 Latency(us) 00:17:30.395 [2024-12-16T05:35:10.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.395 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:30.395 Verification LBA range: start 0x0 length 0x2000 00:17:30.395 nvme0n1 : 1.02 3262.18 12.74 0.00 0.00 38783.82 6881.28 26452.71 00:17:30.395 [2024-12-16T05:35:10.654Z] =================================================================================================================== 00:17:30.395 [2024-12-16T05:35:10.654Z] Total : 3262.18 12.74 0.00 0.00 38783.82 6881.28 26452.71 00:17:30.395 { 00:17:30.395 "results": [ 00:17:30.395 { 00:17:30.395 "job": "nvme0n1", 00:17:30.395 "core_mask": "0x2", 00:17:30.395 "workload": "verify", 00:17:30.395 "status": "finished", 00:17:30.395 "verify_range": { 00:17:30.395 "start": 0, 00:17:30.395 "length": 8192 00:17:30.395 }, 00:17:30.395 "queue_depth": 128, 00:17:30.395 "io_size": 4096, 00:17:30.395 "runtime": 1.020178, 00:17:30.395 "iops": 3262.175816377142, 00:17:30.395 "mibps": 12.74287428272321, 00:17:30.395 "io_failed": 0, 00:17:30.395 "io_timeout": 0, 00:17:30.395 "avg_latency_us": 38783.82097902098, 00:17:30.395 "min_latency_us": 6881.28, 00:17:30.395 "max_latency_us": 26452.712727272727 00:17:30.395 } 00:17:30.395 ], 00:17:30.395 "core_count": 1 00:17:30.395 } 00:17:30.654 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:30.654 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.654 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.654 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.654 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:30.654 "subsystems": [ 00:17:30.654 { 00:17:30.654 "subsystem": "keyring", 00:17:30.654 "config": [ 00:17:30.654 { 00:17:30.654 "method": "keyring_file_add_key", 00:17:30.654 "params": { 00:17:30.654 "name": "key0", 00:17:30.654 "path": "/tmp/tmp.LAIyw6BYMS" 00:17:30.654 } 00:17:30.654 } 00:17:30.654 ] 00:17:30.654 }, 00:17:30.654 { 00:17:30.654 "subsystem": "iobuf", 00:17:30.654 "config": [ 00:17:30.654 { 00:17:30.654 "method": "iobuf_set_options", 00:17:30.654 "params": { 00:17:30.654 "small_pool_count": 8192, 00:17:30.654 "large_pool_count": 1024, 00:17:30.654 "small_bufsize": 8192, 00:17:30.654 "large_bufsize": 135168, 00:17:30.654 "enable_numa": false 00:17:30.654 } 00:17:30.654 } 00:17:30.654 ] 00:17:30.654 }, 00:17:30.654 { 00:17:30.654 "subsystem": "sock", 00:17:30.654 "config": [ 00:17:30.654 { 00:17:30.654 "method": "sock_set_default_impl", 00:17:30.654 "params": { 00:17:30.654 "impl_name": "uring" 00:17:30.654 } 00:17:30.654 }, 00:17:30.654 { 00:17:30.654 "method": "sock_impl_set_options", 00:17:30.654 "params": { 00:17:30.654 "impl_name": "ssl", 00:17:30.654 "recv_buf_size": 4096, 00:17:30.654 "send_buf_size": 4096, 00:17:30.654 "enable_recv_pipe": true, 00:17:30.654 "enable_quickack": false, 00:17:30.654 "enable_placement_id": 0, 00:17:30.654 "enable_zerocopy_send_server": true, 00:17:30.654 "enable_zerocopy_send_client": false, 00:17:30.654 "zerocopy_threshold": 0, 00:17:30.654 "tls_version": 0, 00:17:30.654 "enable_ktls": false 00:17:30.654 } 00:17:30.654 }, 00:17:30.654 { 00:17:30.654 "method": "sock_impl_set_options", 00:17:30.654 "params": { 00:17:30.654 "impl_name": "posix", 
00:17:30.654 "recv_buf_size": 2097152, 00:17:30.654 "send_buf_size": 2097152, 00:17:30.654 "enable_recv_pipe": true, 00:17:30.654 "enable_quickack": false, 00:17:30.654 "enable_placement_id": 0, 00:17:30.654 "enable_zerocopy_send_server": true, 00:17:30.654 "enable_zerocopy_send_client": false, 00:17:30.654 "zerocopy_threshold": 0, 00:17:30.654 "tls_version": 0, 00:17:30.654 "enable_ktls": false 00:17:30.654 } 00:17:30.654 }, 00:17:30.654 { 00:17:30.654 "method": "sock_impl_set_options", 00:17:30.654 "params": { 00:17:30.654 "impl_name": "uring", 00:17:30.654 "recv_buf_size": 2097152, 00:17:30.654 "send_buf_size": 2097152, 00:17:30.654 "enable_recv_pipe": true, 00:17:30.654 "enable_quickack": false, 00:17:30.654 "enable_placement_id": 0, 00:17:30.654 "enable_zerocopy_send_server": false, 00:17:30.654 "enable_zerocopy_send_client": false, 00:17:30.654 "zerocopy_threshold": 0, 00:17:30.654 "tls_version": 0, 00:17:30.654 "enable_ktls": false 00:17:30.654 } 00:17:30.654 } 00:17:30.654 ] 00:17:30.654 }, 00:17:30.654 { 00:17:30.654 "subsystem": "vmd", 00:17:30.654 "config": [] 00:17:30.654 }, 00:17:30.654 { 00:17:30.654 "subsystem": "accel", 00:17:30.654 "config": [ 00:17:30.654 { 00:17:30.654 "method": "accel_set_options", 00:17:30.654 "params": { 00:17:30.654 "small_cache_size": 128, 00:17:30.654 "large_cache_size": 16, 00:17:30.654 "task_count": 2048, 00:17:30.654 "sequence_count": 2048, 00:17:30.654 "buf_count": 2048 00:17:30.654 } 00:17:30.655 } 00:17:30.655 ] 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "subsystem": "bdev", 00:17:30.655 "config": [ 00:17:30.655 { 00:17:30.655 "method": "bdev_set_options", 00:17:30.655 "params": { 00:17:30.655 "bdev_io_pool_size": 65535, 00:17:30.655 "bdev_io_cache_size": 256, 00:17:30.655 "bdev_auto_examine": true, 00:17:30.655 "iobuf_small_cache_size": 128, 00:17:30.655 "iobuf_large_cache_size": 16 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "bdev_raid_set_options", 00:17:30.655 "params": { 00:17:30.655 "process_window_size_kb": 1024, 00:17:30.655 "process_max_bandwidth_mb_sec": 0 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "bdev_iscsi_set_options", 00:17:30.655 "params": { 00:17:30.655 "timeout_sec": 30 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "bdev_nvme_set_options", 00:17:30.655 "params": { 00:17:30.655 "action_on_timeout": "none", 00:17:30.655 "timeout_us": 0, 00:17:30.655 "timeout_admin_us": 0, 00:17:30.655 "keep_alive_timeout_ms": 10000, 00:17:30.655 "arbitration_burst": 0, 00:17:30.655 "low_priority_weight": 0, 00:17:30.655 "medium_priority_weight": 0, 00:17:30.655 "high_priority_weight": 0, 00:17:30.655 "nvme_adminq_poll_period_us": 10000, 00:17:30.655 "nvme_ioq_poll_period_us": 0, 00:17:30.655 "io_queue_requests": 0, 00:17:30.655 "delay_cmd_submit": true, 00:17:30.655 "transport_retry_count": 4, 00:17:30.655 "bdev_retry_count": 3, 00:17:30.655 "transport_ack_timeout": 0, 00:17:30.655 "ctrlr_loss_timeout_sec": 0, 00:17:30.655 "reconnect_delay_sec": 0, 00:17:30.655 "fast_io_fail_timeout_sec": 0, 00:17:30.655 "disable_auto_failback": false, 00:17:30.655 "generate_uuids": false, 00:17:30.655 "transport_tos": 0, 00:17:30.655 "nvme_error_stat": false, 00:17:30.655 "rdma_srq_size": 0, 00:17:30.655 "io_path_stat": false, 00:17:30.655 "allow_accel_sequence": false, 00:17:30.655 "rdma_max_cq_size": 0, 00:17:30.655 "rdma_cm_event_timeout_ms": 0, 00:17:30.655 "dhchap_digests": [ 00:17:30.655 "sha256", 00:17:30.655 "sha384", 00:17:30.655 "sha512" 00:17:30.655 ], 00:17:30.655 
"dhchap_dhgroups": [ 00:17:30.655 "null", 00:17:30.655 "ffdhe2048", 00:17:30.655 "ffdhe3072", 00:17:30.655 "ffdhe4096", 00:17:30.655 "ffdhe6144", 00:17:30.655 "ffdhe8192" 00:17:30.655 ], 00:17:30.655 "rdma_umr_per_io": false 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "bdev_nvme_set_hotplug", 00:17:30.655 "params": { 00:17:30.655 "period_us": 100000, 00:17:30.655 "enable": false 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "bdev_malloc_create", 00:17:30.655 "params": { 00:17:30.655 "name": "malloc0", 00:17:30.655 "num_blocks": 8192, 00:17:30.655 "block_size": 4096, 00:17:30.655 "physical_block_size": 4096, 00:17:30.655 "uuid": "8e7663b0-fb26-45b7-88b2-b7d3c0f73f9e", 00:17:30.655 "optimal_io_boundary": 0, 00:17:30.655 "md_size": 0, 00:17:30.655 "dif_type": 0, 00:17:30.655 "dif_is_head_of_md": false, 00:17:30.655 "dif_pi_format": 0 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "bdev_wait_for_examine" 00:17:30.655 } 00:17:30.655 ] 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "subsystem": "nbd", 00:17:30.655 "config": [] 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "subsystem": "scheduler", 00:17:30.655 "config": [ 00:17:30.655 { 00:17:30.655 "method": "framework_set_scheduler", 00:17:30.655 "params": { 00:17:30.655 "name": "static" 00:17:30.655 } 00:17:30.655 } 00:17:30.655 ] 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "subsystem": "nvmf", 00:17:30.655 "config": [ 00:17:30.655 { 00:17:30.655 "method": "nvmf_set_config", 00:17:30.655 "params": { 00:17:30.655 "discovery_filter": "match_any", 00:17:30.655 "admin_cmd_passthru": { 00:17:30.655 "identify_ctrlr": false 00:17:30.655 }, 00:17:30.655 "dhchap_digests": [ 00:17:30.655 "sha256", 00:17:30.655 "sha384", 00:17:30.655 "sha512" 00:17:30.655 ], 00:17:30.655 "dhchap_dhgroups": [ 00:17:30.655 "null", 00:17:30.655 "ffdhe2048", 00:17:30.655 "ffdhe3072", 00:17:30.655 "ffdhe4096", 00:17:30.655 "ffdhe6144", 00:17:30.655 "ffdhe8192" 00:17:30.655 ] 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "nvmf_set_max_subsystems", 00:17:30.655 "params": { 00:17:30.655 "max_subsystems": 1024 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "nvmf_set_crdt", 00:17:30.655 "params": { 00:17:30.655 "crdt1": 0, 00:17:30.655 "crdt2": 0, 00:17:30.655 "crdt3": 0 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "nvmf_create_transport", 00:17:30.655 "params": { 00:17:30.655 "trtype": "TCP", 00:17:30.655 "max_queue_depth": 128, 00:17:30.655 "max_io_qpairs_per_ctrlr": 127, 00:17:30.655 "in_capsule_data_size": 4096, 00:17:30.655 "max_io_size": 131072, 00:17:30.655 "io_unit_size": 131072, 00:17:30.655 "max_aq_depth": 128, 00:17:30.655 "num_shared_buffers": 511, 00:17:30.655 "buf_cache_size": 4294967295, 00:17:30.655 "dif_insert_or_strip": false, 00:17:30.655 "zcopy": false, 00:17:30.655 "c2h_success": false, 00:17:30.655 "sock_priority": 0, 00:17:30.655 "abort_timeout_sec": 1, 00:17:30.655 "ack_timeout": 0, 00:17:30.655 "data_wr_pool_size": 0 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "nvmf_create_subsystem", 00:17:30.655 "params": { 00:17:30.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.655 "allow_any_host": false, 00:17:30.655 "serial_number": "00000000000000000000", 00:17:30.655 "model_number": "SPDK bdev Controller", 00:17:30.655 "max_namespaces": 32, 00:17:30.655 "min_cntlid": 1, 00:17:30.655 "max_cntlid": 65519, 00:17:30.655 "ana_reporting": false 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 
"method": "nvmf_subsystem_add_host", 00:17:30.655 "params": { 00:17:30.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.655 "host": "nqn.2016-06.io.spdk:host1", 00:17:30.655 "psk": "key0" 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "nvmf_subsystem_add_ns", 00:17:30.655 "params": { 00:17:30.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.655 "namespace": { 00:17:30.655 "nsid": 1, 00:17:30.655 "bdev_name": "malloc0", 00:17:30.655 "nguid": "8E7663B0FB2645B788B2B7D3C0F73F9E", 00:17:30.655 "uuid": "8e7663b0-fb26-45b7-88b2-b7d3c0f73f9e", 00:17:30.655 "no_auto_visible": false 00:17:30.655 } 00:17:30.655 } 00:17:30.655 }, 00:17:30.655 { 00:17:30.655 "method": "nvmf_subsystem_add_listener", 00:17:30.655 "params": { 00:17:30.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.655 "listen_address": { 00:17:30.655 "trtype": "TCP", 00:17:30.655 "adrfam": "IPv4", 00:17:30.655 "traddr": "10.0.0.3", 00:17:30.655 "trsvcid": "4420" 00:17:30.655 }, 00:17:30.655 "secure_channel": false, 00:17:30.655 "sock_impl": "ssl" 00:17:30.655 } 00:17:30.655 } 00:17:30.655 ] 00:17:30.655 } 00:17:30.655 ] 00:17:30.655 }' 00:17:30.655 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:30.914 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:30.915 "subsystems": [ 00:17:30.915 { 00:17:30.915 "subsystem": "keyring", 00:17:30.915 "config": [ 00:17:30.915 { 00:17:30.915 "method": "keyring_file_add_key", 00:17:30.915 "params": { 00:17:30.915 "name": "key0", 00:17:30.915 "path": "/tmp/tmp.LAIyw6BYMS" 00:17:30.915 } 00:17:30.915 } 00:17:30.915 ] 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "subsystem": "iobuf", 00:17:30.915 "config": [ 00:17:30.915 { 00:17:30.915 "method": "iobuf_set_options", 00:17:30.915 "params": { 00:17:30.915 "small_pool_count": 8192, 00:17:30.915 "large_pool_count": 1024, 00:17:30.915 "small_bufsize": 8192, 00:17:30.915 "large_bufsize": 135168, 00:17:30.915 "enable_numa": false 00:17:30.915 } 00:17:30.915 } 00:17:30.915 ] 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "subsystem": "sock", 00:17:30.915 "config": [ 00:17:30.915 { 00:17:30.915 "method": "sock_set_default_impl", 00:17:30.915 "params": { 00:17:30.915 "impl_name": "uring" 00:17:30.915 } 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "method": "sock_impl_set_options", 00:17:30.915 "params": { 00:17:30.915 "impl_name": "ssl", 00:17:30.915 "recv_buf_size": 4096, 00:17:30.915 "send_buf_size": 4096, 00:17:30.915 "enable_recv_pipe": true, 00:17:30.915 "enable_quickack": false, 00:17:30.915 "enable_placement_id": 0, 00:17:30.915 "enable_zerocopy_send_server": true, 00:17:30.915 "enable_zerocopy_send_client": false, 00:17:30.915 "zerocopy_threshold": 0, 00:17:30.915 "tls_version": 0, 00:17:30.915 "enable_ktls": false 00:17:30.915 } 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "method": "sock_impl_set_options", 00:17:30.915 "params": { 00:17:30.915 "impl_name": "posix", 00:17:30.915 "recv_buf_size": 2097152, 00:17:30.915 "send_buf_size": 2097152, 00:17:30.915 "enable_recv_pipe": true, 00:17:30.915 "enable_quickack": false, 00:17:30.915 "enable_placement_id": 0, 00:17:30.915 "enable_zerocopy_send_server": true, 00:17:30.915 "enable_zerocopy_send_client": false, 00:17:30.915 "zerocopy_threshold": 0, 00:17:30.915 "tls_version": 0, 00:17:30.915 "enable_ktls": false 00:17:30.915 } 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "method": "sock_impl_set_options", 00:17:30.915 "params": { 00:17:30.915 
"impl_name": "uring", 00:17:30.915 "recv_buf_size": 2097152, 00:17:30.915 "send_buf_size": 2097152, 00:17:30.915 "enable_recv_pipe": true, 00:17:30.915 "enable_quickack": false, 00:17:30.915 "enable_placement_id": 0, 00:17:30.915 "enable_zerocopy_send_server": false, 00:17:30.915 "enable_zerocopy_send_client": false, 00:17:30.915 "zerocopy_threshold": 0, 00:17:30.915 "tls_version": 0, 00:17:30.915 "enable_ktls": false 00:17:30.915 } 00:17:30.915 } 00:17:30.915 ] 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "subsystem": "vmd", 00:17:30.915 "config": [] 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "subsystem": "accel", 00:17:30.915 "config": [ 00:17:30.915 { 00:17:30.915 "method": "accel_set_options", 00:17:30.915 "params": { 00:17:30.915 "small_cache_size": 128, 00:17:30.915 "large_cache_size": 16, 00:17:30.915 "task_count": 2048, 00:17:30.915 "sequence_count": 2048, 00:17:30.915 "buf_count": 2048 00:17:30.915 } 00:17:30.915 } 00:17:30.915 ] 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "subsystem": "bdev", 00:17:30.915 "config": [ 00:17:30.915 { 00:17:30.915 "method": "bdev_set_options", 00:17:30.915 "params": { 00:17:30.915 "bdev_io_pool_size": 65535, 00:17:30.915 "bdev_io_cache_size": 256, 00:17:30.915 "bdev_auto_examine": true, 00:17:30.915 "iobuf_small_cache_size": 128, 00:17:30.915 "iobuf_large_cache_size": 16 00:17:30.915 } 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "method": "bdev_raid_set_options", 00:17:30.915 "params": { 00:17:30.915 "process_window_size_kb": 1024, 00:17:30.915 "process_max_bandwidth_mb_sec": 0 00:17:30.915 } 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "method": "bdev_iscsi_set_options", 00:17:30.915 "params": { 00:17:30.915 "timeout_sec": 30 00:17:30.915 } 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "method": "bdev_nvme_set_options", 00:17:30.915 "params": { 00:17:30.915 "action_on_timeout": "none", 00:17:30.915 "timeout_us": 0, 00:17:30.915 "timeout_admin_us": 0, 00:17:30.915 "keep_alive_timeout_ms": 10000, 00:17:30.915 "arbitration_burst": 0, 00:17:30.915 "low_priority_weight": 0, 00:17:30.915 "medium_priority_weight": 0, 00:17:30.915 "high_priority_weight": 0, 00:17:30.915 "nvme_adminq_poll_period_us": 10000, 00:17:30.915 "nvme_ioq_poll_period_us": 0, 00:17:30.915 "io_queue_requests": 512, 00:17:30.915 "delay_cmd_submit": true, 00:17:30.915 "transport_retry_count": 4, 00:17:30.915 "bdev_retry_count": 3, 00:17:30.915 "transport_ack_timeout": 0, 00:17:30.915 "ctrlr_loss_timeout_sec": 0, 00:17:30.915 "reconnect_delay_sec": 0, 00:17:30.915 "fast_io_fail_timeout_sec": 0, 00:17:30.915 "disable_auto_failback": false, 00:17:30.915 "generate_uuids": false, 00:17:30.915 "transport_tos": 0, 00:17:30.915 "nvme_error_stat": false, 00:17:30.915 "rdma_srq_size": 0, 00:17:30.915 "io_path_stat": false, 00:17:30.915 "allow_accel_sequence": false, 00:17:30.915 "rdma_max_cq_size": 0, 00:17:30.915 "rdma_cm_event_timeout_ms": 0, 00:17:30.915 "dhchap_digests": [ 00:17:30.915 "sha256", 00:17:30.915 "sha384", 00:17:30.915 "sha512" 00:17:30.915 ], 00:17:30.915 "dhchap_dhgroups": [ 00:17:30.915 "null", 00:17:30.915 "ffdhe2048", 00:17:30.915 "ffdhe3072", 00:17:30.915 "ffdhe4096", 00:17:30.915 "ffdhe6144", 00:17:30.915 "ffdhe8192" 00:17:30.915 ], 00:17:30.915 "rdma_umr_per_io": false 00:17:30.915 } 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "method": "bdev_nvme_attach_controller", 00:17:30.915 "params": { 00:17:30.915 "name": "nvme0", 00:17:30.915 "trtype": "TCP", 00:17:30.915 "adrfam": "IPv4", 00:17:30.915 "traddr": "10.0.0.3", 00:17:30.915 "trsvcid": "4420", 00:17:30.915 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:17:30.915 "prchk_reftag": false, 00:17:30.915 "prchk_guard": false, 00:17:30.915 "ctrlr_loss_timeout_sec": 0, 00:17:30.915 "reconnect_delay_sec": 0, 00:17:30.915 "fast_io_fail_timeout_sec": 0, 00:17:30.915 "psk": "key0", 00:17:30.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.915 "hdgst": false, 00:17:30.915 "ddgst": false, 00:17:30.915 "multipath": "multipath" 00:17:30.915 } 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "method": "bdev_nvme_set_hotplug", 00:17:30.915 "params": { 00:17:30.915 "period_us": 100000, 00:17:30.915 "enable": false 00:17:30.915 } 00:17:30.915 }, 00:17:30.915 { 00:17:30.915 "method": "bdev_enable_histogram", 00:17:30.915 "params": { 00:17:30.915 "name": "nvme0n1", 00:17:30.915 "enable": true 00:17:30.916 } 00:17:30.916 }, 00:17:30.916 { 00:17:30.916 "method": "bdev_wait_for_examine" 00:17:30.916 } 00:17:30.916 ] 00:17:30.916 }, 00:17:30.916 { 00:17:30.916 "subsystem": "nbd", 00:17:30.916 "config": [] 00:17:30.916 } 00:17:30.916 ] 00:17:30.916 }' 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 77108 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77108 ']' 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77108 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77108 00:17:30.916 killing process with pid 77108 00:17:30.916 Received shutdown signal, test time was about 1.000000 seconds 00:17:30.916 00:17:30.916 Latency(us) 00:17:30.916 [2024-12-16T05:35:11.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.916 [2024-12-16T05:35:11.175Z] =================================================================================================================== 00:17:30.916 [2024-12-16T05:35:11.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77108' 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77108 00:17:30.916 05:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77108 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 77069 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77069 ']' 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77069 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77069 00:17:31.852 killing process with pid 77069 00:17:31.852 05:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77069' 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77069 00:17:31.852 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77069 00:17:32.815 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:32.815 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:32.815 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.815 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:32.815 "subsystems": [ 00:17:32.815 { 00:17:32.815 "subsystem": "keyring", 00:17:32.815 "config": [ 00:17:32.815 { 00:17:32.815 "method": "keyring_file_add_key", 00:17:32.815 "params": { 00:17:32.815 "name": "key0", 00:17:32.815 "path": "/tmp/tmp.LAIyw6BYMS" 00:17:32.815 } 00:17:32.815 } 00:17:32.815 ] 00:17:32.815 }, 00:17:32.815 { 00:17:32.815 "subsystem": "iobuf", 00:17:32.815 "config": [ 00:17:32.815 { 00:17:32.815 "method": "iobuf_set_options", 00:17:32.815 "params": { 00:17:32.815 "small_pool_count": 8192, 00:17:32.815 "large_pool_count": 1024, 00:17:32.815 "small_bufsize": 8192, 00:17:32.815 "large_bufsize": 135168, 00:17:32.815 "enable_numa": false 00:17:32.815 } 00:17:32.815 } 00:17:32.815 ] 00:17:32.815 }, 00:17:32.815 { 00:17:32.815 "subsystem": "sock", 00:17:32.815 "config": [ 00:17:32.815 { 00:17:32.815 "method": "sock_set_default_impl", 00:17:32.815 "params": { 00:17:32.815 "impl_name": "uring" 00:17:32.815 } 00:17:32.815 }, 00:17:32.815 { 00:17:32.815 "method": "sock_impl_set_options", 00:17:32.815 "params": { 00:17:32.815 "impl_name": "ssl", 00:17:32.815 "recv_buf_size": 4096, 00:17:32.815 "send_buf_size": 4096, 00:17:32.815 "enable_recv_pipe": true, 00:17:32.815 "enable_quickack": false, 00:17:32.815 "enable_placement_id": 0, 00:17:32.815 "enable_zerocopy_send_server": true, 00:17:32.815 "enable_zerocopy_send_client": false, 00:17:32.815 "zerocopy_threshold": 0, 00:17:32.815 "tls_version": 0, 00:17:32.815 "enable_ktls": false 00:17:32.815 } 00:17:32.815 }, 00:17:32.815 { 00:17:32.815 "method": "sock_impl_set_options", 00:17:32.815 "params": { 00:17:32.815 "impl_name": "posix", 00:17:32.815 "recv_buf_size": 2097152, 00:17:32.815 "send_buf_size": 2097152, 00:17:32.815 "enable_recv_pipe": true, 00:17:32.815 "enable_quickack": false, 00:17:32.815 "enable_placement_id": 0, 00:17:32.815 "enable_zerocopy_send_server": true, 00:17:32.815 "enable_zerocopy_send_client": false, 00:17:32.815 "zerocopy_threshold": 0, 00:17:32.815 "tls_version": 0, 00:17:32.815 "enable_ktls": false 00:17:32.815 } 00:17:32.815 }, 00:17:32.815 { 00:17:32.815 "method": "sock_impl_set_options", 00:17:32.815 "params": { 00:17:32.815 "impl_name": "uring", 00:17:32.815 "recv_buf_size": 2097152, 00:17:32.815 "send_buf_size": 2097152, 00:17:32.815 "enable_recv_pipe": true, 00:17:32.815 "enable_quickack": false, 00:17:32.815 "enable_placement_id": 0, 00:17:32.815 "enable_zerocopy_send_server": false, 00:17:32.815 "enable_zerocopy_send_client": false, 00:17:32.815 "zerocopy_threshold": 0, 00:17:32.815 "tls_version": 0, 00:17:32.815 
"enable_ktls": false 00:17:32.815 } 00:17:32.815 } 00:17:32.815 ] 00:17:32.815 }, 00:17:32.815 { 00:17:32.815 "subsystem": "vmd", 00:17:32.815 "config": [] 00:17:32.815 }, 00:17:32.815 { 00:17:32.815 "subsystem": "accel", 00:17:32.815 "config": [ 00:17:32.815 { 00:17:32.815 "method": "accel_set_options", 00:17:32.815 "params": { 00:17:32.815 "small_cache_size": 128, 00:17:32.816 "large_cache_size": 16, 00:17:32.816 "task_count": 2048, 00:17:32.816 "sequence_count": 2048, 00:17:32.816 "buf_count": 2048 00:17:32.816 } 00:17:32.816 } 00:17:32.816 ] 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "subsystem": "bdev", 00:17:32.816 "config": [ 00:17:32.816 { 00:17:32.816 "method": "bdev_set_options", 00:17:32.816 "params": { 00:17:32.816 "bdev_io_pool_size": 65535, 00:17:32.816 "bdev_io_cache_size": 256, 00:17:32.816 "bdev_auto_examine": true, 00:17:32.816 "iobuf_small_cache_size": 128, 00:17:32.816 "iobuf_large_cache_size": 16 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "bdev_raid_set_options", 00:17:32.816 "params": { 00:17:32.816 "process_window_size_kb": 1024, 00:17:32.816 "process_max_bandwidth_mb_sec": 0 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "bdev_iscsi_set_options", 00:17:32.816 "params": { 00:17:32.816 "timeout_sec": 30 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "bdev_nvme_set_options", 00:17:32.816 "params": { 00:17:32.816 "action_on_timeout": "none", 00:17:32.816 "timeout_us": 0, 00:17:32.816 "timeout_admin_us": 0, 00:17:32.816 "keep_alive_timeout_ms": 10000, 00:17:32.816 "arbitration_burst": 0, 00:17:32.816 "low_priority_weight": 0, 00:17:32.816 "medium_priority_weight": 0, 00:17:32.816 "high_priority_weight": 0, 00:17:32.816 "nvme_adminq_poll_period_us": 10000, 00:17:32.816 "nvme_ioq_poll_period_us": 0, 00:17:32.816 "io_queue_requests": 0, 00:17:32.816 "delay_cmd_submit": true, 00:17:32.816 "transport_retry_count": 4, 00:17:32.816 "bdev_retry_count": 3, 00:17:32.816 "transport_ack_timeout": 0, 00:17:32.816 "ctrlr_loss_timeout_sec": 0, 00:17:32.816 "reconnect_delay_sec": 0, 00:17:32.816 "fast_io_fail_timeout_sec": 0, 00:17:32.816 "disable_auto_failback": false, 00:17:32.816 "generate_uuids": false, 00:17:32.816 "transport_tos": 0, 00:17:32.816 "nvme_error_stat": false, 00:17:32.816 "rdma_srq_size": 0, 00:17:32.816 "io_path_stat": false, 00:17:32.816 "allow_accel_sequence": false, 00:17:32.816 "rdma_max_cq_size": 0, 00:17:32.816 "rdma_cm_event_timeout_ms": 0, 00:17:32.816 "dhchap_digests": [ 00:17:32.816 "sha256", 00:17:32.816 "sha384", 00:17:32.816 "sha512" 00:17:32.816 ], 00:17:32.816 "dhchap_dhgroups": [ 00:17:32.816 "null", 00:17:32.816 "ffdhe2048", 00:17:32.816 "ffdhe3072", 00:17:32.816 "ffdhe4096", 00:17:32.816 "ffdhe6144", 00:17:32.816 "ffdhe8192" 00:17:32.816 ], 00:17:32.816 "rdma_umr_per_io": false 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "bdev_nvme_set_hotplug", 00:17:32.816 "params": { 00:17:32.816 "period_us": 100000, 00:17:32.816 "enable": false 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "bdev_malloc_create", 00:17:32.816 "params": { 00:17:32.816 "name": "malloc0", 00:17:32.816 "num_blocks": 8192, 00:17:32.816 "block_size": 4096, 00:17:32.816 "physical_block_size": 4096, 00:17:32.816 "uuid": "8e7663b0-fb26-45b7-88b2-b7d3c0f73f9e", 00:17:32.816 "optimal_io_boundary": 0, 00:17:32.816 "md_size": 0, 00:17:32.816 "dif_type": 0, 00:17:32.816 "dif_is_head_of_md": false, 00:17:32.816 "dif_pi_format": 0 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 
00:17:32.816 "method": "bdev_wait_for_examine" 00:17:32.816 } 00:17:32.816 ] 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "subsystem": "nbd", 00:17:32.816 "config": [] 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "subsystem": "scheduler", 00:17:32.816 "config": [ 00:17:32.816 { 00:17:32.816 "method": "framework_set_scheduler", 00:17:32.816 "params": { 00:17:32.816 "name": "static" 00:17:32.816 } 00:17:32.816 } 00:17:32.816 ] 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "subsystem": "nvmf", 00:17:32.816 "config": [ 00:17:32.816 { 00:17:32.816 "method": "nvmf_set_config", 00:17:32.816 "params": { 00:17:32.816 "discovery_filter": "match_any", 00:17:32.816 "admin_cmd_passthru": { 00:17:32.816 "identify_ctrlr": false 00:17:32.816 }, 00:17:32.816 "dhchap_digests": [ 00:17:32.816 "sha256", 00:17:32.816 "sha384", 00:17:32.816 "sha512" 00:17:32.816 ], 00:17:32.816 "dhchap_dhgroups": [ 00:17:32.816 "null", 00:17:32.816 "ffdhe2048", 00:17:32.816 "ffdhe3072", 00:17:32.816 "ffdhe4096", 00:17:32.816 "ffdhe6144", 00:17:32.816 "ffdhe8192" 00:17:32.816 ] 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "nvmf_set_max_subsystems", 00:17:32.816 "params": { 00:17:32.816 "max_subsystems": 1024 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "nvmf_set_crdt", 00:17:32.816 "params": { 00:17:32.816 "crdt1": 0, 00:17:32.816 "crdt2": 0, 00:17:32.816 "crdt3": 0 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "nvmf_create_transport", 00:17:32.816 "params": { 00:17:32.816 "trtype": "TCP", 00:17:32.816 "max_queue_depth": 128, 00:17:32.816 "max_io_qpairs_per_ctrlr": 127, 00:17:32.816 "in_capsule_data_size": 4096, 00:17:32.816 "max_io_size": 131072, 00:17:32.816 "io_unit_size": 131072, 00:17:32.816 "max_aq_depth": 128, 00:17:32.816 "num_shared_buffers": 511, 00:17:32.816 "buf_cache_size": 4294967295, 00:17:32.816 "dif_insert_or_strip": false, 00:17:32.816 "zcopy": false, 00:17:32.816 "c2h_success": false, 00:17:32.816 "sock_priority": 0, 00:17:32.816 "abort_timeout_sec": 1, 00:17:32.816 "ack_timeout": 0, 00:17:32.816 "data_wr_pool_size": 0 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "nvmf_create_subsystem", 00:17:32.816 "params": { 00:17:32.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.816 "allow_any_host": false, 00:17:32.816 "serial_number": "00000000000000000000", 00:17:32.816 "model_number": "SPDK bdev Controller", 00:17:32.816 "max_namespaces": 32, 00:17:32.816 "min_cntlid": 1, 00:17:32.816 "max_cntlid": 65519, 00:17:32.816 "ana_reporting": false 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "nvmf_subsystem_add_host", 00:17:32.816 "params": { 00:17:32.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.816 "host": "nqn.2016-06.io.spdk:host1", 00:17:32.816 "psk": "key0" 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "nvmf_subsystem_add_ns", 00:17:32.816 "params": { 00:17:32.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.816 "namespace": { 00:17:32.816 "nsid": 1, 00:17:32.816 "bdev_name": "malloc0", 00:17:32.816 "nguid": "8E7663B0FB2645B788B2B7D3C0F73F9E", 00:17:32.816 "uuid": "8e7663b0-fb26-45b7-88b2-b7d3c0f73f9e", 00:17:32.816 "no_auto_visible": false 00:17:32.816 } 00:17:32.816 } 00:17:32.816 }, 00:17:32.816 { 00:17:32.816 "method": "nvmf_subsystem_add_listener", 00:17:32.816 "params": { 00:17:32.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.816 "listen_address": { 00:17:32.816 "trtype": "TCP", 00:17:32.816 "adrfam": "IPv4", 00:17:32.816 "traddr": "10.0.0.3", 00:17:32.816 
"trsvcid": "4420" 00:17:32.816 }, 00:17:32.816 "secure_channel": false, 00:17:32.816 "sock_impl": "ssl" 00:17:32.816 } 00:17:32.816 } 00:17:32.816 ] 00:17:32.816 } 00:17:32.816 ] 00:17:32.816 }' 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=77182 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 77182 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77182 ']' 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.816 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.075 [2024-12-16 05:35:13.093477] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:33.075 [2024-12-16 05:35:13.093692] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.075 [2024-12-16 05:35:13.274858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.334 [2024-12-16 05:35:13.359360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.334 [2024-12-16 05:35:13.359414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.334 [2024-12-16 05:35:13.359449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.334 [2024-12-16 05:35:13.359471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.334 [2024-12-16 05:35:13.359485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.334 [2024-12-16 05:35:13.360722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.594 [2024-12-16 05:35:13.617287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:33.594 [2024-12-16 05:35:13.762420] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.594 [2024-12-16 05:35:13.794422] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:33.594 [2024-12-16 05:35:13.794733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=77214 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 77214 /var/tmp/bdevperf.sock 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 77214 ']' 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:33.853 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.854 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:33.854 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:33.854 "subsystems": [ 00:17:33.854 { 00:17:33.854 "subsystem": "keyring", 00:17:33.854 "config": [ 00:17:33.854 { 00:17:33.854 "method": "keyring_file_add_key", 00:17:33.854 "params": { 00:17:33.854 "name": "key0", 00:17:33.854 "path": "/tmp/tmp.LAIyw6BYMS" 00:17:33.854 } 00:17:33.854 } 00:17:33.854 ] 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "subsystem": "iobuf", 00:17:33.854 "config": [ 00:17:33.854 { 00:17:33.854 "method": "iobuf_set_options", 00:17:33.854 "params": { 00:17:33.854 "small_pool_count": 8192, 00:17:33.854 "large_pool_count": 1024, 00:17:33.854 "small_bufsize": 8192, 00:17:33.854 "large_bufsize": 135168, 00:17:33.854 "enable_numa": false 00:17:33.854 } 00:17:33.854 } 00:17:33.854 ] 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "subsystem": "sock", 00:17:33.854 "config": [ 00:17:33.854 { 00:17:33.854 "method": "sock_set_default_impl", 00:17:33.854 "params": { 00:17:33.854 "impl_name": "uring" 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "sock_impl_set_options", 00:17:33.854 "params": { 00:17:33.854 "impl_name": "ssl", 00:17:33.854 "recv_buf_size": 4096, 00:17:33.854 "send_buf_size": 4096, 00:17:33.854 "enable_recv_pipe": true, 00:17:33.854 "enable_quickack": false, 00:17:33.854 "enable_placement_id": 0, 00:17:33.854 "enable_zerocopy_send_server": true, 00:17:33.854 "enable_zerocopy_send_client": false, 00:17:33.854 "zerocopy_threshold": 0, 00:17:33.854 "tls_version": 0, 00:17:33.854 "enable_ktls": false 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "sock_impl_set_options", 00:17:33.854 "params": { 00:17:33.854 "impl_name": "posix", 00:17:33.854 "recv_buf_size": 2097152, 00:17:33.854 "send_buf_size": 2097152, 00:17:33.854 "enable_recv_pipe": true, 00:17:33.854 "enable_quickack": false, 00:17:33.854 "enable_placement_id": 0, 00:17:33.854 "enable_zerocopy_send_server": true, 00:17:33.854 "enable_zerocopy_send_client": false, 00:17:33.854 "zerocopy_threshold": 0, 00:17:33.854 "tls_version": 0, 00:17:33.854 "enable_ktls": false 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "sock_impl_set_options", 00:17:33.854 "params": { 00:17:33.854 "impl_name": "uring", 00:17:33.854 "recv_buf_size": 2097152, 00:17:33.854 "send_buf_size": 2097152, 00:17:33.854 "enable_recv_pipe": true, 00:17:33.854 "enable_quickack": false, 00:17:33.854 "enable_placement_id": 0, 00:17:33.854 "enable_zerocopy_send_server": false, 00:17:33.854 "enable_zerocopy_send_client": false, 00:17:33.854 "zerocopy_threshold": 0, 00:17:33.854 "tls_version": 0, 00:17:33.854 "enable_ktls": false 00:17:33.854 } 00:17:33.854 } 00:17:33.854 ] 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "subsystem": "vmd", 00:17:33.854 "config": [] 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "subsystem": "accel", 00:17:33.854 "config": [ 00:17:33.854 { 00:17:33.854 "method": "accel_set_options", 00:17:33.854 "params": { 00:17:33.854 "small_cache_size": 128, 00:17:33.854 "large_cache_size": 16, 00:17:33.854 "task_count": 2048, 00:17:33.854 "sequence_count": 2048, 
00:17:33.854 "buf_count": 2048 00:17:33.854 } 00:17:33.854 } 00:17:33.854 ] 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "subsystem": "bdev", 00:17:33.854 "config": [ 00:17:33.854 { 00:17:33.854 "method": "bdev_set_options", 00:17:33.854 "params": { 00:17:33.854 "bdev_io_pool_size": 65535, 00:17:33.854 "bdev_io_cache_size": 256, 00:17:33.854 "bdev_auto_examine": true, 00:17:33.854 "iobuf_small_cache_size": 128, 00:17:33.854 "iobuf_large_cache_size": 16 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "bdev_raid_set_options", 00:17:33.854 "params": { 00:17:33.854 "process_window_size_kb": 1024, 00:17:33.854 "process_max_bandwidth_mb_sec": 0 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "bdev_iscsi_set_options", 00:17:33.854 "params": { 00:17:33.854 "timeout_sec": 30 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "bdev_nvme_set_options", 00:17:33.854 "params": { 00:17:33.854 "action_on_timeout": "none", 00:17:33.854 "timeout_us": 0, 00:17:33.854 "timeout_admin_us": 0, 00:17:33.854 "keep_alive_timeout_ms": 10000, 00:17:33.854 "arbitration_burst": 0, 00:17:33.854 "low_priority_weight": 0, 00:17:33.854 "medium_priority_weight": 0, 00:17:33.854 "high_priority_weight": 0, 00:17:33.854 "nvme_adminq_poll_period_us": 10000, 00:17:33.854 "nvme_ioq_poll_period_us": 0, 00:17:33.854 "io_queue_requests": 512, 00:17:33.854 "delay_cmd_submit": true, 00:17:33.854 "transport_retry_count": 4, 00:17:33.854 "bdev_retry_count": 3, 00:17:33.854 "transport_ack_timeout": 0, 00:17:33.854 "ctrlr_loss_timeout_sec": 0, 00:17:33.854 "reconnect_delay_sec": 0, 00:17:33.854 "fast_io_fail_timeout_sec": 0, 00:17:33.854 "disable_auto_failback": false, 00:17:33.854 "generate_uuids": false, 00:17:33.854 "transport_tos": 0, 00:17:33.854 "nvme_error_stat": false, 00:17:33.854 "rdma_srq_size": 0, 00:17:33.854 "io_path_stat": false, 00:17:33.854 "allow_accel_sequence": false, 00:17:33.854 "rdma_max_cq_size": 0, 00:17:33.854 "rdma_cm_event_timeout_ms": 0, 00:17:33.854 "dhchap_digests": [ 00:17:33.854 "sha256", 00:17:33.854 "sha384", 00:17:33.854 "sha512" 00:17:33.854 ], 00:17:33.854 "dhchap_dhgroups": [ 00:17:33.854 "null", 00:17:33.854 "ffdhe2048", 00:17:33.854 "ffdhe3072", 00:17:33.854 "ffdhe4096", 00:17:33.854 "ffdhe6144", 00:17:33.854 "ffdhe8192" 00:17:33.854 ], 00:17:33.854 "rdma_umr_per_io": false 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "bdev_nvme_attach_controller", 00:17:33.854 "params": { 00:17:33.854 "name": "nvme0", 00:17:33.854 "trtype": "TCP", 00:17:33.854 "adrfam": "IPv4", 00:17:33.854 "traddr": "10.0.0.3", 00:17:33.854 "trsvcid": "4420", 00:17:33.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.854 "prchk_reftag": false, 00:17:33.854 "prchk_guard": false, 00:17:33.854 "ctrlr_loss_timeout_sec": 0, 00:17:33.854 "reconnect_delay_sec": 0, 00:17:33.854 "fast_io_fail_timeout_sec": 0, 00:17:33.854 "psk": "key0", 00:17:33.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:33.854 "hdgst": false, 00:17:33.854 "ddgst": false, 00:17:33.854 "multipath": "multipath" 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "bdev_nvme_set_hotplug", 00:17:33.854 "params": { 00:17:33.854 "period_us": 100000, 00:17:33.854 "enable": false 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "bdev_enable_histogram", 00:17:33.854 "params": { 00:17:33.854 "name": "nvme0n1", 00:17:33.854 "enable": true 00:17:33.854 } 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "method": "bdev_wait_for_examine" 00:17:33.854 } 
00:17:33.854 ] 00:17:33.854 }, 00:17:33.854 { 00:17:33.854 "subsystem": "nbd", 00:17:33.854 "config": [] 00:17:33.854 } 00:17:33.854 ] 00:17:33.854 }' 00:17:34.113 [2024-12-16 05:35:14.147346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:34.113 [2024-12-16 05:35:14.147512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77214 ] 00:17:34.113 [2024-12-16 05:35:14.323492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.372 [2024-12-16 05:35:14.446279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.632 [2024-12-16 05:35:14.677648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:34.632 [2024-12-16 05:35:14.775054] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.891 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.891 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:34.891 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:34.891 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:35.150 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.150 05:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:35.408 Running I/O for 1 seconds... 
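The /dev/fd/63 blob handed to bdevperf is the mirror image of the target config: the same PSK registered as key0, the same ssl/posix/uring sock options, and a single bdev_nvme_attach_controller that dials nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420 as host nqn.2016-06.io.spdk:host1 with psk key0, after which bdevperf.py triggers the verify workload. Driving the same thing by hand would look roughly like this (the bdevperf paths and workload arguments are taken from this log; the attach-controller flag names are assumed from current rpc.py and may vary by SPDK version):

  # hypothetical manual equivalent of the initiator-side config above
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LAIyw6BYMS
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests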
00:17:36.345 3316.00 IOPS, 12.95 MiB/s 00:17:36.345 Latency(us) 00:17:36.345 [2024-12-16T05:35:16.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.345 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:36.345 Verification LBA range: start 0x0 length 0x2000 00:17:36.345 nvme0n1 : 1.04 3317.65 12.96 0.00 0.00 38083.22 8281.37 23473.80 00:17:36.345 [2024-12-16T05:35:16.604Z] =================================================================================================================== 00:17:36.345 [2024-12-16T05:35:16.604Z] Total : 3317.65 12.96 0.00 0.00 38083.22 8281.37 23473.80 00:17:36.345 { 00:17:36.345 "results": [ 00:17:36.345 { 00:17:36.345 "job": "nvme0n1", 00:17:36.345 "core_mask": "0x2", 00:17:36.346 "workload": "verify", 00:17:36.346 "status": "finished", 00:17:36.346 "verify_range": { 00:17:36.346 "start": 0, 00:17:36.346 "length": 8192 00:17:36.346 }, 00:17:36.346 "queue_depth": 128, 00:17:36.346 "io_size": 4096, 00:17:36.346 "runtime": 1.038387, 00:17:36.346 "iops": 3317.645540631768, 00:17:36.346 "mibps": 12.959552893092845, 00:17:36.346 "io_failed": 0, 00:17:36.346 "io_timeout": 0, 00:17:36.346 "avg_latency_us": 38083.22122918591, 00:17:36.346 "min_latency_us": 8281.367272727273, 00:17:36.346 "max_latency_us": 23473.803636363635 00:17:36.346 } 00:17:36.346 ], 00:17:36.346 "core_count": 1 00:17:36.346 } 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:36.346 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:36.346 nvmf_trace.0 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 77214 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77214 ']' 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77214 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77214 00:17:36.605 killing process 
with pid 77214 00:17:36.605 Received shutdown signal, test time was about 1.000000 seconds 00:17:36.605 00:17:36.605 Latency(us) 00:17:36.605 [2024-12-16T05:35:16.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.605 [2024-12-16T05:35:16.864Z] =================================================================================================================== 00:17:36.605 [2024-12-16T05:35:16.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77214' 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77214 00:17:36.605 05:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77214 00:17:37.173 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:37.173 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:37.173 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.433 rmmod nvme_tcp 00:17:37.433 rmmod nvme_fabrics 00:17:37.433 rmmod nvme_keyring 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 77182 ']' 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 77182 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 77182 ']' 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 77182 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77182 00:17:37.433 killing process with pid 77182 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77182' 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 77182 00:17:37.433 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 77182 
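As a quick cross-check of the result block a few lines above: 3317.65 IOPS at the 4096-byte verify I/O size is 3317.65 x 4096 / 2^20 ≈ 12.96 MiB/s, matching the reported MiB/s column, and with the queue depth of 128 Little's law gives 128 / 3317.65 ≈ 38.6 ms of average latency, in line with the reported 38.08 ms. A one-liner to reproduce the throughput figure (assuming python3 is available on the test host):

  python3 -c 'print(3317.645540631768 * 4096 / 2**20)'    # -> 12.9595..., the mibps value in the results JSON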
00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:38.370 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ze7Ebd6bk8 /tmp/tmp.7D8A1lNJnJ /tmp/tmp.LAIyw6BYMS 00:17:38.629 ************************************ 00:17:38.629 END TEST nvmf_tls 00:17:38.629 ************************************ 00:17:38.629 00:17:38.629 real 1m47.330s 00:17:38.629 user 2m57.972s 00:17:38.629 sys 0m26.400s 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:38.629 05:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:38.630 05:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.630 05:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.630 ************************************ 00:17:38.630 START TEST nvmf_fips 00:17:38.630 ************************************ 00:17:38.630 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:38.890 * Looking for test storage... 00:17:38.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:38.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.890 --rc genhtml_branch_coverage=1 00:17:38.890 --rc genhtml_function_coverage=1 00:17:38.890 --rc genhtml_legend=1 00:17:38.890 --rc geninfo_all_blocks=1 00:17:38.890 --rc geninfo_unexecuted_blocks=1 00:17:38.890 00:17:38.890 ' 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:38.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.890 --rc genhtml_branch_coverage=1 00:17:38.890 --rc genhtml_function_coverage=1 00:17:38.890 --rc genhtml_legend=1 00:17:38.890 --rc geninfo_all_blocks=1 00:17:38.890 --rc geninfo_unexecuted_blocks=1 00:17:38.890 00:17:38.890 ' 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:38.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.890 --rc genhtml_branch_coverage=1 00:17:38.890 --rc genhtml_function_coverage=1 00:17:38.890 --rc genhtml_legend=1 00:17:38.890 --rc geninfo_all_blocks=1 00:17:38.890 --rc geninfo_unexecuted_blocks=1 00:17:38.890 00:17:38.890 ' 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:38.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.890 --rc genhtml_branch_coverage=1 00:17:38.890 --rc genhtml_function_coverage=1 00:17:38.890 --rc genhtml_legend=1 00:17:38.890 --rc geninfo_all_blocks=1 00:17:38.890 --rc geninfo_unexecuted_blocks=1 00:17:38.890 00:17:38.890 ' 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:38.890 05:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.890 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:38.891 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:17:39.152 Error setting digest 00:17:39.152 40729D8CEA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:39.152 40729D8CEA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.152 
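Everything from the openssl version probe down to the two "Error setting digest" lines is fips.sh convincing itself that the host OpenSSL really is running in FIPS mode before any TLS traffic is exercised: it requires OpenSSL >= 3.0.0, checks that /usr/lib64/ossl-modules/fips.so is present, points OPENSSL_CONF at a generated spdk_fips.conf, verifies that both a base and a fips provider are loaded, and finally treats a failing openssl md5 as the positive signal that non-approved digests are disabled, so the error output here is the expected outcome rather than a test failure. A minimal standalone version of that last probe, assuming an OpenSSL 3.x host with the fips provider installed and a config that activates it, is:

  # sketch of the FIPS self-check; spdk_fips.conf here stands in for whatever build_openssl_config emits
  export OPENSSL_CONF=spdk_fips.conf
  openssl list -providers | grep name            # expect both a "base" and a "fips" provider entry
  if echo test | openssl md5 /dev/stdin 2>/dev/null; then
      echo 'MD5 still works - FIPS mode is NOT active'
  else
      echo 'MD5 rejected - FIPS provider is enforcing approved algorithms'
  fi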
05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:39.152 Cannot find device "nvmf_init_br" 00:17:39.152 05:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:39.152 Cannot find device "nvmf_init_br2" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:39.152 Cannot find device "nvmf_tgt_br" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.152 Cannot find device "nvmf_tgt_br2" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:39.152 Cannot find device "nvmf_init_br" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:39.152 Cannot find device "nvmf_init_br2" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:39.152 Cannot find device "nvmf_tgt_br" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:39.152 Cannot find device "nvmf_tgt_br2" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:39.152 Cannot find device "nvmf_br" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:39.152 Cannot find device "nvmf_init_if" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:39.152 Cannot find device "nvmf_init_if2" 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:39.152 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:39.153 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:39.153 05:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:39.153 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:39.153 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:39.153 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:39.153 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:39.153 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:39.153 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:39.153 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:39.153 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:39.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:39.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:39.412 00:17:39.412 --- 10.0.0.3 ping statistics --- 00:17:39.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.412 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:39.412 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:39.412 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:17:39.412 00:17:39.412 --- 10.0.0.4 ping statistics --- 00:17:39.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.412 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:39.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:39.412 00:17:39.412 --- 10.0.0.1 ping statistics --- 00:17:39.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.412 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:39.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:39.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:17:39.412 00:17:39.412 --- 10.0.0.2 ping statistics --- 00:17:39.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.412 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=77544 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 77544 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 77544 ']' 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.412 05:35:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:39.671 [2024-12-16 05:35:19.730374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:39.671 [2024-12-16 05:35:19.730546] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.672 [2024-12-16 05:35:19.919558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.930 [2024-12-16 05:35:20.044749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.930 [2024-12-16 05:35:20.045079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.930 [2024-12-16 05:35:20.045122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.930 [2024-12-16 05:35:20.045139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.930 [2024-12-16 05:35:20.045155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.930 [2024-12-16 05:35:20.046639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.189 [2024-12-16 05:35:20.218288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:40.448 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.448 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:40.448 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:40.448 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:40.448 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:40.448 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.448 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:40.448 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:40.707 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:40.707 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.mWu 00:17:40.707 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:40.707 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.mWu 00:17:40.707 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.mWu 00:17:40.707 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.mWu 00:17:40.707 05:35:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.965 [2024-12-16 05:35:21.005122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.965 [2024-12-16 05:35:21.021066] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.965 [2024-12-16 05:35:21.021325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:40.965 malloc0 00:17:40.965 05:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.965 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=77580 00:17:40.965 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.965 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 77580 /var/tmp/bdevperf.sock 00:17:40.965 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 77580 ']' 00:17:40.965 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.965 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.965 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.965 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.965 05:35:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:41.223 [2024-12-16 05:35:21.226661] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:41.223 [2024-12-16 05:35:21.226863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77580 ] 00:17:41.223 [2024-12-16 05:35:21.400143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.482 [2024-12-16 05:35:21.525312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.482 [2024-12-16 05:35:21.717299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.050 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.050 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:42.050 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.mWu 00:17:42.309 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:42.568 [2024-12-16 05:35:22.590210] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.568 TLSTESTn1 00:17:42.568 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:42.568 Running I/O for 10 seconds... 
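Condensed, the initiator-side sequence the fips test just performed amounts to the following (a sketch assuming the commands are run from the SPDK repo root; the PSK string and the mktemp path are the throwaway values from this particular run, and the target-side subsystem/listener provisioning done by setup_nvmf_tgt_conf is not repeated here):

  # write the TLS pre-shared key to a private 0600 file
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"

  # start bdevperf with its own RPC socket, register the key, and attach to the target over TLS
  # (the test waits for the RPC socket to come up before issuing these calls)
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # drive the 10-second verify workload against the attached TLSTESTn1 bdev
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests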
00:17:44.883 3240.00 IOPS, 12.66 MiB/s [2024-12-16T05:35:26.078Z] 3263.00 IOPS, 12.75 MiB/s [2024-12-16T05:35:27.015Z] 3261.00 IOPS, 12.74 MiB/s [2024-12-16T05:35:27.953Z] 3278.75 IOPS, 12.81 MiB/s [2024-12-16T05:35:28.916Z] 3294.80 IOPS, 12.87 MiB/s [2024-12-16T05:35:29.855Z] 3300.50 IOPS, 12.89 MiB/s [2024-12-16T05:35:31.232Z] 3306.00 IOPS, 12.91 MiB/s [2024-12-16T05:35:32.169Z] 3306.00 IOPS, 12.91 MiB/s [2024-12-16T05:35:33.107Z] 3297.56 IOPS, 12.88 MiB/s [2024-12-16T05:35:33.107Z] 3302.00 IOPS, 12.90 MiB/s 00:17:52.848 Latency(us) 00:17:52.848 [2024-12-16T05:35:33.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.848 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:52.848 Verification LBA range: start 0x0 length 0x2000 00:17:52.848 TLSTESTn1 : 10.02 3307.81 12.92 0.00 0.00 38627.23 6225.92 35746.91 00:17:52.848 [2024-12-16T05:35:33.107Z] =================================================================================================================== 00:17:52.848 [2024-12-16T05:35:33.107Z] Total : 3307.81 12.92 0.00 0.00 38627.23 6225.92 35746.91 00:17:52.848 { 00:17:52.848 "results": [ 00:17:52.848 { 00:17:52.848 "job": "TLSTESTn1", 00:17:52.848 "core_mask": "0x4", 00:17:52.848 "workload": "verify", 00:17:52.848 "status": "finished", 00:17:52.848 "verify_range": { 00:17:52.848 "start": 0, 00:17:52.848 "length": 8192 00:17:52.848 }, 00:17:52.848 "queue_depth": 128, 00:17:52.848 "io_size": 4096, 00:17:52.848 "runtime": 10.018404, 00:17:52.848 "iops": 3307.8123022389595, 00:17:52.848 "mibps": 12.921141805620936, 00:17:52.848 "io_failed": 0, 00:17:52.848 "io_timeout": 0, 00:17:52.848 "avg_latency_us": 38627.232102795664, 00:17:52.848 "min_latency_us": 6225.92, 00:17:52.848 "max_latency_us": 35746.90909090909 00:17:52.848 } 00:17:52.848 ], 00:17:52.848 "core_count": 1 00:17:52.848 } 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:52.848 nvmf_trace.0 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 77580 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 77580 ']' 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 77580 
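The MiB/s column in the table above is just IOPS times the 4 KiB I/O size; as a quick sanity check against the JSON result (values copied from this run):

  awk 'BEGIN { iops = 3307.8123; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
  # prints 12.92, matching the reported "mibps" field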
00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77580 00:17:52.848 killing process with pid 77580 00:17:52.848 Received shutdown signal, test time was about 10.000000 seconds 00:17:52.848 00:17:52.848 Latency(us) 00:17:52.848 [2024-12-16T05:35:33.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.848 [2024-12-16T05:35:33.107Z] =================================================================================================================== 00:17:52.848 [2024-12-16T05:35:33.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77580' 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 77580 00:17:52.848 05:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 77580 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:53.786 rmmod nvme_tcp 00:17:53.786 rmmod nvme_fabrics 00:17:53.786 rmmod nvme_keyring 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 77544 ']' 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 77544 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 77544 ']' 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 77544 00:17:53.786 05:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:53.786 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.786 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77544 00:17:53.786 killing process with pid 77544 00:17:53.786 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:53.786 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:53.786 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77544' 00:17:53.786 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 77544 00:17:53.786 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 77544 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:54.723 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:54.982 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:54.982 05:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.mWu 00:17:54.982 ************************************ 00:17:54.982 END TEST nvmf_fips 00:17:54.982 ************************************ 00:17:54.982 00:17:54.982 real 0m16.390s 00:17:54.982 user 0m23.687s 00:17:54.982 sys 0m5.370s 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.982 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.241 ************************************ 00:17:55.241 START TEST nvmf_control_msg_list 00:17:55.241 ************************************ 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:55.241 * Looking for test storage... 00:17:55.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.241 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:55.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.242 --rc genhtml_branch_coverage=1 00:17:55.242 --rc genhtml_function_coverage=1 00:17:55.242 --rc genhtml_legend=1 00:17:55.242 --rc geninfo_all_blocks=1 00:17:55.242 --rc geninfo_unexecuted_blocks=1 00:17:55.242 00:17:55.242 ' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:55.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.242 --rc genhtml_branch_coverage=1 00:17:55.242 --rc genhtml_function_coverage=1 00:17:55.242 --rc genhtml_legend=1 00:17:55.242 --rc geninfo_all_blocks=1 00:17:55.242 --rc geninfo_unexecuted_blocks=1 00:17:55.242 00:17:55.242 ' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:55.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.242 --rc genhtml_branch_coverage=1 00:17:55.242 --rc genhtml_function_coverage=1 00:17:55.242 --rc genhtml_legend=1 00:17:55.242 --rc geninfo_all_blocks=1 00:17:55.242 --rc geninfo_unexecuted_blocks=1 00:17:55.242 00:17:55.242 ' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:55.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.242 --rc genhtml_branch_coverage=1 00:17:55.242 --rc genhtml_function_coverage=1 00:17:55.242 --rc genhtml_legend=1 00:17:55.242 --rc geninfo_all_blocks=1 00:17:55.242 --rc 
geninfo_unexecuted_blocks=1 00:17:55.242 00:17:55.242 ' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.242 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.242 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:55.500 Cannot find device "nvmf_init_br" 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:55.500 Cannot find device "nvmf_init_br2" 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:55.500 Cannot find device "nvmf_tgt_br" 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:55.500 Cannot find device "nvmf_tgt_br2" 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:55.500 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:55.500 Cannot find device "nvmf_init_br" 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:55.501 Cannot find device "nvmf_init_br2" 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:55.501 Cannot find device "nvmf_tgt_br" 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:55.501 Cannot find device "nvmf_tgt_br2" 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:55.501 Cannot find device "nvmf_br" 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:55.501 Cannot find 
device "nvmf_init_if" 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:55.501 Cannot find device "nvmf_init_if2" 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:55.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:55.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:55.501 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:55.759 05:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:55.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:55.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:55.759 00:17:55.759 --- 10.0.0.3 ping statistics --- 00:17:55.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.759 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:55.759 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:55.759 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:17:55.759 00:17:55.759 --- 10.0.0.4 ping statistics --- 00:17:55.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.759 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:55.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:55.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:17:55.759 00:17:55.759 --- 10.0.0.1 ping statistics --- 00:17:55.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.759 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:55.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:17:55.759 00:17:55.759 --- 10.0.0.2 ping statistics --- 00:17:55.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.759 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=77982 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 77982 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 77982 ']' 00:17:55.759 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.760 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.760 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
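Both the fips and the control_msg_list runs rebuild the same disposable test network before starting the target: two initiator-side veth pairs kept in the root namespace (10.0.0.1 and 10.0.0.2) and two target-side pairs moved into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), all joined by the nvmf_br bridge with iptables ACCEPT rules for TCP port 4420. Condensed from the nvmf/common.sh calls logged above (a sketch; the link bring-up, the second interface pair, and the ping verification are abbreviated):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # bring all links (and lo inside the namespace) up, repeat for the *_if2/*_br2 pair,
  # then ping 10.0.0.3/10.0.0.4 from the root namespace and 10.0.0.1/10.0.0.2 from the netns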
00:17:55.760 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.760 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:56.018 [2024-12-16 05:35:36.019581] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:56.018 [2024-12-16 05:35:36.019968] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.018 [2024-12-16 05:35:36.206306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.278 [2024-12-16 05:35:36.331870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.278 [2024-12-16 05:35:36.331942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.278 [2024-12-16 05:35:36.331974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.278 [2024-12-16 05:35:36.332003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.278 [2024-12-16 05:35:36.332021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.278 [2024-12-16 05:35:36.333463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.278 [2024-12-16 05:35:36.527192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.846 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.846 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:56.846 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:56.846 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.846 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:56.846 [2024-12-16 05:35:37.030517] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:56.846 Malloc0 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:56.846 [2024-12-16 05:35:37.093135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=78014 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=78015 00:17:56.846 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=78016 00:17:56.847 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:56.847 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 78014 00:17:57.105 [2024-12-16 05:35:37.347912] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:57.105 [2024-12-16 05:35:37.358974] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:57.105 [2024-12-16 05:35:37.359530] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:58.482 Initializing NVMe Controllers 00:17:58.482 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:58.482 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:58.482 Initialization complete. Launching workers. 00:17:58.482 ======================================================== 00:17:58.482 Latency(us) 00:17:58.482 Device Information : IOPS MiB/s Average min max 00:17:58.482 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2644.00 10.33 377.64 174.51 1416.52 00:17:58.482 ======================================================== 00:17:58.482 Total : 2644.00 10.33 377.64 174.51 1416.52 00:17:58.482 00:17:58.482 Initializing NVMe Controllers 00:17:58.482 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:58.482 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:58.482 Initialization complete. Launching workers. 00:17:58.482 ======================================================== 00:17:58.482 Latency(us) 00:17:58.482 Device Information : IOPS MiB/s Average min max 00:17:58.482 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2678.00 10.46 372.82 240.95 689.92 00:17:58.482 ======================================================== 00:17:58.482 Total : 2678.00 10.46 372.82 240.95 689.92 00:17:58.482 00:17:58.482 Initializing NVMe Controllers 00:17:58.482 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:58.482 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:58.482 Initialization complete. Launching workers. 
00:17:58.482 ======================================================== 00:17:58.482 Latency(us) 00:17:58.482 Device Information : IOPS MiB/s Average min max 00:17:58.483 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2673.96 10.45 373.57 205.28 795.39 00:17:58.483 ======================================================== 00:17:58.483 Total : 2673.96 10.45 373.57 205.28 795.39 00:17:58.483 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 78015 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 78016 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.483 rmmod nvme_tcp 00:17:58.483 rmmod nvme_fabrics 00:17:58.483 rmmod nvme_keyring 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 77982 ']' 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 77982 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 77982 ']' 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 77982 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77982 00:17:58.483 killing process with pid 77982 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77982' 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 77982 00:17:58.483 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 77982 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:59.420 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:59.421 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:59.680 00:17:59.680 real 0m4.480s 00:17:59.680 user 0m6.642s 00:17:59.680 
sys 0m1.574s 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.680 ************************************ 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:59.680 END TEST nvmf_control_msg_list 00:17:59.680 ************************************ 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:59.680 ************************************ 00:17:59.680 START TEST nvmf_wait_for_buf 00:17:59.680 ************************************ 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:59.680 * Looking for test storage... 00:17:59.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:59.680 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:59.939 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.940 --rc genhtml_branch_coverage=1 00:17:59.940 --rc genhtml_function_coverage=1 00:17:59.940 --rc genhtml_legend=1 00:17:59.940 --rc geninfo_all_blocks=1 00:17:59.940 --rc geninfo_unexecuted_blocks=1 00:17:59.940 00:17:59.940 ' 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.940 --rc genhtml_branch_coverage=1 00:17:59.940 --rc genhtml_function_coverage=1 00:17:59.940 --rc genhtml_legend=1 00:17:59.940 --rc geninfo_all_blocks=1 00:17:59.940 --rc geninfo_unexecuted_blocks=1 00:17:59.940 00:17:59.940 ' 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.940 --rc genhtml_branch_coverage=1 00:17:59.940 --rc genhtml_function_coverage=1 00:17:59.940 --rc genhtml_legend=1 00:17:59.940 --rc geninfo_all_blocks=1 00:17:59.940 --rc geninfo_unexecuted_blocks=1 00:17:59.940 00:17:59.940 ' 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.940 --rc genhtml_branch_coverage=1 00:17:59.940 --rc genhtml_function_coverage=1 00:17:59.940 --rc genhtml_legend=1 00:17:59.940 --rc geninfo_all_blocks=1 00:17:59.940 --rc geninfo_unexecuted_blocks=1 00:17:59.940 00:17:59.940 ' 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:59.940 05:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.940 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:59.940 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
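Earlier in this preamble the lt/cmp_versions walk from scripts/common.sh decides whether the installed lcov predates 2.x and picks LCOV_OPTS accordingly. Stripped of the per-component decimal validation and the extra operators, it is a component-wise numeric compare on dot/dash-separated version strings; a minimal sketch of that comparison (simplified relative to the real cmp_versions, which also validates each component and supports >, = and mixed separators):

    # Succeed if version $1 is strictly lower than version $2, e.g. lt 1.15 2
    lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v a b
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                              # equal versions are not lower-than
    }

In the trace, lt 1.15 2 succeeds, so the 1.x branch/function coverage flags are the ones exported into LCOV_OPTS.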
00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:59.940 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:59.941 Cannot find device "nvmf_init_br" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:59.941 Cannot find device "nvmf_init_br2" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:59.941 Cannot find device "nvmf_tgt_br" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.941 Cannot find device "nvmf_tgt_br2" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:59.941 Cannot find device "nvmf_init_br" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:59.941 Cannot find device "nvmf_init_br2" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:59.941 Cannot find device "nvmf_tgt_br" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:59.941 Cannot find device "nvmf_tgt_br2" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:59.941 Cannot find device "nvmf_br" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:59.941 Cannot find device "nvmf_init_if" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:59.941 Cannot find device "nvmf_init_if2" 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.941 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:59.941 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.200 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.200 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.200 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.200 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.200 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.200 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.200 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:00.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:00.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.157 ms 00:18:00.201 00:18:00.201 --- 10.0.0.3 ping statistics --- 00:18:00.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.201 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:00.201 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:00.201 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:18:00.201 00:18:00.201 --- 10.0.0.4 ping statistics --- 00:18:00.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.201 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:00.201 00:18:00.201 --- 10.0.0.1 ping statistics --- 00:18:00.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.201 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:00.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:00.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:00.201 00:18:00.201 --- 10.0.0.2 ping statistics --- 00:18:00.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.201 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=78256 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 78256 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 78256 ']' 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.201 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:00.460 [2024-12-16 05:35:40.525703] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
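nvmfappstart above launches the target inside the namespace with --wait-for-rpc, which keeps subsystem initialization paused so the test can tune accel and iobuf options over RPC before calling framework_start_init (the very next RPCs in this trace). waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A rough sketch of that handshake; polling via scripts/rpc.py spdk_get_version is an assumption here, and the real waitforlisten helper in autotest_common.sh does its own socket checks and bookkeeping:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Wait (bounded, like max_retries=100 in the trace) for the RPC socket to answer
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do
        "$rpc" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null && break
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done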
00:18:00.460 [2024-12-16 05:35:40.525848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.460 [2024-12-16 05:35:40.702915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.721 [2024-12-16 05:35:40.831525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.721 [2024-12-16 05:35:40.831617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.721 [2024-12-16 05:35:40.831643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.721 [2024-12-16 05:35:40.831671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.721 [2024-12-16 05:35:40.831689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.721 [2024-12-16 05:35:40.833134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:01.289 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.289 05:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 [2024-12-16 05:35:41.634488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 Malloc0 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 [2024-12-16 05:35:41.772667] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.549 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:01.549 [2024-12-16 05:35:41.805118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:01.808 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.808 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:01.808 [2024-12-16 05:35:42.043868] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:03.185 Initializing NVMe Controllers 00:18:03.185 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:03.185 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:03.185 Initialization complete. Launching workers. 00:18:03.185 ======================================================== 00:18:03.185 Latency(us) 00:18:03.185 Device Information : IOPS MiB/s Average min max 00:18:03.185 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 494.04 61.75 8096.40 6950.70 16016.90 00:18:03.185 ======================================================== 00:18:03.185 Total : 494.04 61.75 8096.40 6950.70 16016.90 00:18:03.185 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4712 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4712 -eq 0 ]] 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:03.185 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.444 rmmod nvme_tcp 00:18:03.444 rmmod nvme_fabrics 00:18:03.444 rmmod nvme_keyring 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 78256 ']' 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 78256 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 78256 ']' 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 78256 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78256 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.444 killing process with pid 78256 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78256' 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 78256 00:18:03.444 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 78256 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.382 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:18:04.669 00:18:04.669 real 0m4.846s 00:18:04.669 user 0m4.318s 00:18:04.669 sys 0m0.894s 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:04.669 ************************************ 00:18:04.669 END TEST nvmf_wait_for_buf 00:18:04.669 ************************************ 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:04.669 ************************************ 00:18:04.669 START TEST nvmf_fuzz 00:18:04.669 ************************************ 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:04.669 * Looking for test storage... 
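The nvmf_wait_for_buf pass/fail decision traced above hinges on the small-pool retry counter the target reports. A minimal way to reproduce that check outside the harness, assuming a running SPDK target reachable over the default RPC socket; the rpc_cmd wrapper from the trace is replaced with a direct rpc.py call, and treating a zero count as a failure is an inference from the [[ retry_count -eq 0 ]] test at wait_for_buf.sh line 33, not something the log states outright:

  # Read how many times the nvmf_TCP iobuf small pool had to retry an allocation
  # (RPC name and jq filter are taken verbatim from the trace above).
  retry_count=$(scripts/rpc.py iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  # The test is only meaningful when buffer exhaustion actually forced retries.
  if [[ "$retry_count" -eq 0 ]]; then
      echo "no iobuf small-pool retries recorded" >&2
      exit 1
  fi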
00:18:04.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.669 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:04.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.670 --rc genhtml_branch_coverage=1 00:18:04.670 --rc genhtml_function_coverage=1 00:18:04.670 --rc genhtml_legend=1 00:18:04.670 --rc geninfo_all_blocks=1 00:18:04.670 --rc geninfo_unexecuted_blocks=1 00:18:04.670 00:18:04.670 ' 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:04.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.670 --rc genhtml_branch_coverage=1 00:18:04.670 --rc genhtml_function_coverage=1 00:18:04.670 --rc genhtml_legend=1 00:18:04.670 --rc geninfo_all_blocks=1 00:18:04.670 --rc geninfo_unexecuted_blocks=1 00:18:04.670 00:18:04.670 ' 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:04.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.670 --rc genhtml_branch_coverage=1 00:18:04.670 --rc genhtml_function_coverage=1 00:18:04.670 --rc genhtml_legend=1 00:18:04.670 --rc geninfo_all_blocks=1 00:18:04.670 --rc geninfo_unexecuted_blocks=1 00:18:04.670 00:18:04.670 ' 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:04.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.670 --rc genhtml_branch_coverage=1 00:18:04.670 --rc genhtml_function_coverage=1 00:18:04.670 --rc genhtml_legend=1 00:18:04.670 --rc geninfo_all_blocks=1 00:18:04.670 --rc geninfo_unexecuted_blocks=1 00:18:04.670 00:18:04.670 ' 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
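The lcov probe above walks through cmp_versions from scripts/common.sh, which splits both version strings on '.', '-' and ':' and compares them field by field. The following is a simplified, self-contained restatement of that pattern for readability, not the upstream function itself, and it assumes purely numeric fields:

  version_lt() {
      local IFS='.-:'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          # Missing fields compare as 0, so "2" behaves like "2.0".
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "1.15 sorts before 2"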
00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.670 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:04.940 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
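The "[: : integer expression expected" message captured above comes from build_nvmf_app_args testing an unset flag with the single-bracket -eq operator, which requires both operands to be integers. A two-line reproduction and the usual defensive form follow; the variable name is illustrative, not the one used in nvmf/common.sh:

  unset SPDK_TEST_FOO
  [ "$SPDK_TEST_FOO" -eq 1 ] && echo enabled       # prints "[: : integer expression expected"
  [ "${SPDK_TEST_FOO:-0}" -eq 1 ] && echo enabled  # defaulting to 0 keeps the comparison numeric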
00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:04.940 Cannot find device "nvmf_init_br" 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:04.940 05:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:04.940 Cannot find device "nvmf_init_br2" 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:04.940 Cannot find device "nvmf_tgt_br" 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.940 Cannot find device "nvmf_tgt_br2" 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:04.940 Cannot find device "nvmf_init_br" 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:04.940 Cannot find device "nvmf_init_br2" 00:18:04.940 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:04.940 Cannot find device "nvmf_tgt_br" 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:04.940 Cannot find device "nvmf_tgt_br2" 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:04.940 Cannot find device "nvmf_br" 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:04.940 Cannot find device "nvmf_init_if" 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:04.940 Cannot find device "nvmf_init_if2" 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:04.940 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:05.199 05:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:05.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:05.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:05.199 00:18:05.199 --- 10.0.0.3 ping statistics --- 00:18:05.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.199 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:05.199 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:05.199 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:18:05.199 00:18:05.199 --- 10.0.0.4 ping statistics --- 00:18:05.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.199 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:05.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:05.199 00:18:05.199 --- 10.0.0.1 ping statistics --- 00:18:05.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.199 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:05.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:05.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:18:05.199 00:18:05.199 --- 10.0.0.2 ping statistics --- 00:18:05.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.199 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78560 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78560 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 78560 ']' 00:18:05.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
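Everything from nvmf_veth_init down to waitforlisten above builds a self-contained TCP test topology: veth pairs bridged on the host, the target ends moved into the nvmf_tgt_ns_spdk namespace, SPDK-tagged iptables ACCEPT rules for port 4420, and nvmf_tgt started inside that namespace. A condensed sketch of a single initiator/target pair, using only commands that appear in the trace (the full helper creates a second pair, extra port rules, and a FORWARD rule on the bridge):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                            # host-side initiator reaches the namespaced target
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &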
00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.199 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:06.576 Malloc0 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:18:06.576 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:18:07.144 Shutting down the fuzz application 00:18:07.144 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:07.403 Shutting down the fuzz application 00:18:07.403 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:07.403 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.403 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:07.403 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.403 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:07.403 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:07.403 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:07.403 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.662 rmmod nvme_tcp 00:18:07.662 rmmod nvme_fabrics 00:18:07.662 rmmod nvme_keyring 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 78560 ']' 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 78560 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 78560 ']' 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 78560 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78560 00:18:07.662 killing process with pid 78560 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78560' 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 78560 00:18:07.662 05:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 78560 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:08.599 05:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:08.599 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:08.600 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:08.600 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:08.600 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:08.600 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.859 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.859 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:18:08.859 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:08.859 ************************************ 00:18:08.859 END TEST nvmf_fuzz 00:18:08.859 ************************************ 00:18:08.859 00:18:08.859 real 0m4.328s 00:18:08.859 user 0m4.528s 00:18:08.859 sys 0m0.880s 00:18:08.859 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.859 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:08.859 05:35:49 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:08.859 05:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.859 05:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.859 05:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.859 ************************************ 00:18:08.859 START TEST nvmf_multiconnection 00:18:08.859 ************************************ 00:18:08.859 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:09.119 * Looking for test storage... 00:18:09.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.119 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:09.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.120 --rc genhtml_branch_coverage=1 00:18:09.120 --rc genhtml_function_coverage=1 00:18:09.120 --rc genhtml_legend=1 00:18:09.120 --rc geninfo_all_blocks=1 00:18:09.120 --rc geninfo_unexecuted_blocks=1 00:18:09.120 00:18:09.120 ' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:09.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.120 --rc genhtml_branch_coverage=1 00:18:09.120 --rc genhtml_function_coverage=1 00:18:09.120 --rc genhtml_legend=1 00:18:09.120 --rc geninfo_all_blocks=1 00:18:09.120 --rc geninfo_unexecuted_blocks=1 00:18:09.120 00:18:09.120 ' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:09.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.120 --rc genhtml_branch_coverage=1 00:18:09.120 --rc genhtml_function_coverage=1 00:18:09.120 --rc genhtml_legend=1 00:18:09.120 --rc geninfo_all_blocks=1 00:18:09.120 --rc geninfo_unexecuted_blocks=1 00:18:09.120 00:18:09.120 ' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:09.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.120 --rc genhtml_branch_coverage=1 00:18:09.120 --rc genhtml_function_coverage=1 00:18:09.120 --rc genhtml_legend=1 00:18:09.120 --rc geninfo_all_blocks=1 00:18:09.120 --rc geninfo_unexecuted_blocks=1 00:18:09.120 00:18:09.120 ' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.120 
05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.120 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.120 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.121 05:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:09.121 Cannot find device "nvmf_init_br" 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:09.121 Cannot find device "nvmf_init_br2" 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:09.121 Cannot find device "nvmf_tgt_br" 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.121 Cannot find device "nvmf_tgt_br2" 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:09.121 Cannot find device "nvmf_init_br" 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:09.121 Cannot find device "nvmf_init_br2" 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:18:09.121 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:09.380 Cannot find device "nvmf_tgt_br" 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:09.380 Cannot find device "nvmf_tgt_br2" 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:09.380 Cannot find device "nvmf_br" 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:09.380 Cannot find device "nvmf_init_if" 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:18:09.380 Cannot find device "nvmf_init_if2" 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:09.380 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:09.381 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:09.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:09.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:09.640 00:18:09.640 --- 10.0.0.3 ping statistics --- 00:18:09.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.640 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:09.640 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:09.640 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:18:09.640 00:18:09.640 --- 10.0.0.4 ping statistics --- 00:18:09.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.640 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:09.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:09.640 00:18:09.640 --- 10.0.0.1 ping statistics --- 00:18:09.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.640 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:09.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:18:09.640 00:18:09.640 --- 10.0.0.2 ping statistics --- 00:18:09.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.640 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.640 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=78816 00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 78816 00:18:09.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 78816 ']' 00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.641 05:35:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:09.641 [2024-12-16 05:35:49.808638] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:09.641 [2024-12-16 05:35:49.809631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.900 [2024-12-16 05:35:49.993640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.900 [2024-12-16 05:35:50.087340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.900 [2024-12-16 05:35:50.087391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.900 [2024-12-16 05:35:50.087407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.900 [2024-12-16 05:35:50.087418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.900 [2024-12-16 05:35:50.087429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.900 [2024-12-16 05:35:50.089309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.900 [2024-12-16 05:35:50.089481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.900 [2024-12-16 05:35:50.089650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.900 [2024-12-16 05:35:50.089900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.159 [2024-12-16 05:35:50.261341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.726 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.726 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:18:10.726 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.726 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.727 [2024-12-16 05:35:50.814261] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:10.727 05:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.727 Malloc1 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.727 [2024-12-16 05:35:50.933156] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.727 05:35:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 Malloc2 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 Malloc3 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 Malloc4 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.986 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 Malloc5 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:11.246 
05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 Malloc6 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 Malloc7 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.246 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 Malloc8 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 
05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 Malloc9 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 Malloc10 00:18:11.506 05:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.506 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.765 Malloc11 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.765 05:35:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:11.765 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:11.765 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:11.765 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.765 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:11.765 05:35:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:14.300 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:16.204 05:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:16.204 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:18.107 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:18.107 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:18.107 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:18:18.107 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:18.107 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.107 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:18.107 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.107 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:18:18.365 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:18.365 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:18.365 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.365 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:18:18.365 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:20.269 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:20.269 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:20.269 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:18:20.269 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:20.269 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.269 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:20.269 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.269 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:18:20.528 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:20.528 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:20.528 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.528 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:20.528 05:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:22.498 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:22.498 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:22.498 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:18:22.498 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:22.498 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.498 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:22.498 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.498 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:18:22.757 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:22.757 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:22.757 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:18:22.757 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:22.757 05:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:24.661 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:24.661 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:24.661 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:18:24.661 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:24.661 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.661 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:24.661 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:24.662 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:18:24.920 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:24.920 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:24.920 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.920 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:24.920 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:26.821 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:26.821 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:18:26.821 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:26.821 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:26.821 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.821 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:26.821 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:26.821 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:18:27.080 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:27.080 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:18:27.080 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.080 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:27.080 05:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:28.983 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:28.983 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:28.983 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:18:28.983 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:28.983 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.983 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:28.983 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.983 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:18:29.242 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:29.242 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:29.242 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.242 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:29.242 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:31.146 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:31.146 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:31.146 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:18:31.146 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:31.146 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.146 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:31.146 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.146 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:18:31.404 05:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:31.404 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:31.404 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:31.404 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:31.404 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:33.309 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:33.309 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:33.309 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:18:33.309 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:33.309 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:33.309 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:33.309 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:33.309 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:18:33.567 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:33.567 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:33.567 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.567 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:33.567 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:36.101 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:36.101 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:36.101 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:18:36.101 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:36.101 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.101 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:36.101 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:36.101 [global] 00:18:36.101 thread=1 00:18:36.101 invalidate=1 00:18:36.101 rw=read 00:18:36.101 time_based=1 
00:18:36.101 runtime=10 00:18:36.101 ioengine=libaio 00:18:36.101 direct=1 00:18:36.101 bs=262144 00:18:36.101 iodepth=64 00:18:36.101 norandommap=1 00:18:36.101 numjobs=1 00:18:36.101 00:18:36.101 [job0] 00:18:36.101 filename=/dev/nvme0n1 00:18:36.101 [job1] 00:18:36.101 filename=/dev/nvme10n1 00:18:36.101 [job2] 00:18:36.101 filename=/dev/nvme1n1 00:18:36.101 [job3] 00:18:36.101 filename=/dev/nvme2n1 00:18:36.101 [job4] 00:18:36.101 filename=/dev/nvme3n1 00:18:36.101 [job5] 00:18:36.102 filename=/dev/nvme4n1 00:18:36.102 [job6] 00:18:36.102 filename=/dev/nvme5n1 00:18:36.102 [job7] 00:18:36.102 filename=/dev/nvme6n1 00:18:36.102 [job8] 00:18:36.102 filename=/dev/nvme7n1 00:18:36.102 [job9] 00:18:36.102 filename=/dev/nvme8n1 00:18:36.102 [job10] 00:18:36.102 filename=/dev/nvme9n1 00:18:36.102 Could not set queue depth (nvme0n1) 00:18:36.102 Could not set queue depth (nvme10n1) 00:18:36.102 Could not set queue depth (nvme1n1) 00:18:36.102 Could not set queue depth (nvme2n1) 00:18:36.102 Could not set queue depth (nvme3n1) 00:18:36.102 Could not set queue depth (nvme4n1) 00:18:36.102 Could not set queue depth (nvme5n1) 00:18:36.102 Could not set queue depth (nvme6n1) 00:18:36.102 Could not set queue depth (nvme7n1) 00:18:36.102 Could not set queue depth (nvme8n1) 00:18:36.102 Could not set queue depth (nvme9n1) 00:18:36.102 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:36.102 fio-3.35 00:18:36.102 Starting 11 threads 00:18:48.313 00:18:48.313 job0: (groupid=0, jobs=1): err= 0: pid=79277: Mon Dec 16 05:36:26 2024 00:18:48.313 read: IOPS=586, BW=147MiB/s (154MB/s)(1474MiB/10042msec) 00:18:48.313 slat (usec): min=21, max=139942, avg=1691.07, stdev=4667.39 00:18:48.313 clat (msec): min=14, max=348, avg=107.23, stdev=25.56 00:18:48.313 lat (msec): min=15, max=348, avg=108.92, stdev=25.83 00:18:48.313 clat percentiles (msec): 00:18:48.313 | 1.00th=[ 70], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 99], 00:18:48.313 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 107], 00:18:48.313 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 114], 95.00th=[ 120], 00:18:48.313 | 99.00th=[ 239], 99.50th=[ 309], 99.90th=[ 338], 99.95th=[ 351], 00:18:48.313 | 99.99th=[ 351] 00:18:48.313 bw ( KiB/s): min=69120, max=158720, per=24.02%, 
avg=149273.60, stdev=19759.61, samples=20 00:18:48.313 iops : min= 270, max= 620, avg=583.10, stdev=77.19, samples=20 00:18:48.313 lat (msec) : 20=0.05%, 50=0.10%, 100=25.64%, 250=73.28%, 500=0.93% 00:18:48.313 cpu : usr=0.45%, sys=2.52%, ctx=1211, majf=0, minf=4097 00:18:48.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:48.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.313 issued rwts: total=5894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.313 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.313 job1: (groupid=0, jobs=1): err= 0: pid=79278: Mon Dec 16 05:36:26 2024 00:18:48.313 read: IOPS=594, BW=149MiB/s (156MB/s)(1491MiB/10040msec) 00:18:48.313 slat (usec): min=20, max=81013, avg=1671.07, stdev=4028.14 00:18:48.313 clat (msec): min=36, max=248, avg=105.90, stdev=16.68 00:18:48.313 lat (msec): min=44, max=253, avg=107.57, stdev=16.95 00:18:48.313 clat percentiles (msec): 00:18:48.313 | 1.00th=[ 66], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 99], 00:18:48.313 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 106], 00:18:48.313 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 115], 95.00th=[ 123], 00:18:48.313 | 99.00th=[ 184], 99.50th=[ 213], 99.90th=[ 241], 99.95th=[ 249], 00:18:48.313 | 99.99th=[ 249] 00:18:48.313 bw ( KiB/s): min=81920, max=161280, per=24.31%, avg=151091.20, stdev=16669.43, samples=20 00:18:48.313 iops : min= 320, max= 630, avg=590.20, stdev=65.11, samples=20 00:18:48.313 lat (msec) : 50=0.40%, 100=26.29%, 250=73.31% 00:18:48.313 cpu : usr=0.42%, sys=2.52%, ctx=1254, majf=0, minf=4097 00:18:48.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:48.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.313 issued rwts: total=5965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.313 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.313 job2: (groupid=0, jobs=1): err= 0: pid=79279: Mon Dec 16 05:36:26 2024 00:18:48.313 read: IOPS=123, BW=30.8MiB/s (32.3MB/s)(314MiB/10192msec) 00:18:48.313 slat (usec): min=19, max=234289, avg=7268.78, stdev=20491.69 00:18:48.313 clat (msec): min=51, max=708, avg=511.36, stdev=88.92 00:18:48.313 lat (msec): min=52, max=708, avg=518.63, stdev=90.10 00:18:48.313 clat percentiles (msec): 00:18:48.313 | 1.00th=[ 178], 5.00th=[ 317], 10.00th=[ 447], 20.00th=[ 472], 00:18:48.313 | 30.00th=[ 489], 40.00th=[ 506], 50.00th=[ 523], 60.00th=[ 535], 00:18:48.313 | 70.00th=[ 558], 80.00th=[ 575], 90.00th=[ 600], 95.00th=[ 625], 00:18:48.313 | 99.00th=[ 659], 99.50th=[ 684], 99.90th=[ 709], 99.95th=[ 709], 00:18:48.313 | 99.99th=[ 709] 00:18:48.313 bw ( KiB/s): min=21504, max=34816, per=4.91%, avg=30515.20, stdev=3462.21, samples=20 00:18:48.313 iops : min= 84, max= 136, avg=119.20, stdev=13.52, samples=20 00:18:48.313 lat (msec) : 100=0.48%, 250=2.15%, 500=33.28%, 750=64.09% 00:18:48.313 cpu : usr=0.06%, sys=0.61%, ctx=269, majf=0, minf=4097 00:18:48.313 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:18:48.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.313 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.313 issued rwts: total=1256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.313 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:18:48.313 job3: (groupid=0, jobs=1): err= 0: pid=79280: Mon Dec 16 05:36:26 2024 00:18:48.313 read: IOPS=114, BW=28.7MiB/s (30.1MB/s)(292MiB/10189msec) 00:18:48.313 slat (usec): min=20, max=385427, avg=8564.40, stdev=27147.25 00:18:48.313 clat (msec): min=46, max=898, avg=548.33, stdev=88.14 00:18:48.313 lat (msec): min=47, max=898, avg=556.90, stdev=88.68 00:18:48.313 clat percentiles (msec): 00:18:48.313 | 1.00th=[ 271], 5.00th=[ 426], 10.00th=[ 456], 20.00th=[ 481], 00:18:48.313 | 30.00th=[ 502], 40.00th=[ 527], 50.00th=[ 558], 60.00th=[ 584], 00:18:48.313 | 70.00th=[ 600], 80.00th=[ 625], 90.00th=[ 651], 95.00th=[ 659], 00:18:48.313 | 99.00th=[ 709], 99.50th=[ 735], 99.90th=[ 743], 99.95th=[ 902], 00:18:48.313 | 99.99th=[ 902] 00:18:48.313 bw ( KiB/s): min=14364, max=38912, per=4.55%, avg=28289.40, stdev=6599.56, samples=20 00:18:48.313 iops : min= 56, max= 152, avg=110.50, stdev=25.79, samples=20 00:18:48.314 lat (msec) : 50=0.34%, 250=0.51%, 500=29.00%, 750=70.06%, 1000=0.09% 00:18:48.314 cpu : usr=0.09%, sys=0.52%, ctx=209, majf=0, minf=4097 00:18:48.314 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:18:48.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.314 issued rwts: total=1169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.314 job4: (groupid=0, jobs=1): err= 0: pid=79281: Mon Dec 16 05:36:26 2024 00:18:48.314 read: IOPS=172, BW=43.0MiB/s (45.1MB/s)(437MiB/10159msec) 00:18:48.314 slat (usec): min=20, max=104257, avg=5720.54, stdev=13879.08 00:18:48.314 clat (msec): min=19, max=516, avg=365.48, stdev=66.83 00:18:48.314 lat (msec): min=19, max=516, avg=371.20, stdev=67.55 00:18:48.314 clat percentiles (msec): 00:18:48.314 | 1.00th=[ 144], 5.00th=[ 218], 10.00th=[ 284], 20.00th=[ 334], 00:18:48.314 | 30.00th=[ 355], 40.00th=[ 368], 50.00th=[ 380], 60.00th=[ 393], 00:18:48.314 | 70.00th=[ 401], 80.00th=[ 414], 90.00th=[ 426], 95.00th=[ 443], 00:18:48.314 | 99.00th=[ 472], 99.50th=[ 477], 99.90th=[ 518], 99.95th=[ 518], 00:18:48.314 | 99.99th=[ 518] 00:18:48.314 bw ( KiB/s): min=38400, max=58484, per=6.94%, avg=43141.80, stdev=4614.40, samples=20 00:18:48.314 iops : min= 150, max= 228, avg=168.50, stdev=17.95, samples=20 00:18:48.314 lat (msec) : 20=0.06%, 50=0.46%, 100=0.29%, 250=7.26%, 500=91.60% 00:18:48.314 lat (msec) : 750=0.34% 00:18:48.314 cpu : usr=0.10%, sys=0.81%, ctx=351, majf=0, minf=4097 00:18:48.314 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:18:48.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.314 issued rwts: total=1749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.314 job5: (groupid=0, jobs=1): err= 0: pid=79282: Mon Dec 16 05:36:26 2024 00:18:48.314 read: IOPS=117, BW=29.4MiB/s (30.8MB/s)(300MiB/10194msec) 00:18:48.314 slat (usec): min=20, max=305388, avg=8384.28, stdev=24230.66 00:18:48.314 clat (msec): min=77, max=812, avg=535.40, stdev=105.78 00:18:48.314 lat (msec): min=78, max=812, avg=543.79, stdev=106.47 00:18:48.314 clat percentiles (msec): 00:18:48.314 | 1.00th=[ 88], 5.00th=[ 414], 10.00th=[ 451], 20.00th=[ 485], 00:18:48.314 | 30.00th=[ 502], 40.00th=[ 523], 50.00th=[ 550], 60.00th=[ 
567], 00:18:48.314 | 70.00th=[ 592], 80.00th=[ 617], 90.00th=[ 642], 95.00th=[ 659], 00:18:48.314 | 99.00th=[ 701], 99.50th=[ 701], 99.90th=[ 743], 99.95th=[ 810], 00:18:48.314 | 99.99th=[ 810] 00:18:48.314 bw ( KiB/s): min=20480, max=35328, per=4.67%, avg=29030.40, stdev=4113.14, samples=20 00:18:48.314 iops : min= 80, max= 138, avg=113.40, stdev=16.07, samples=20 00:18:48.314 lat (msec) : 100=2.59%, 250=0.83%, 500=23.21%, 750=73.29%, 1000=0.08% 00:18:48.314 cpu : usr=0.07%, sys=0.57%, ctx=227, majf=0, minf=4097 00:18:48.314 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:18:48.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.314 issued rwts: total=1198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.314 job6: (groupid=0, jobs=1): err= 0: pid=79283: Mon Dec 16 05:36:26 2024 00:18:48.314 read: IOPS=110, BW=27.6MiB/s (28.9MB/s)(281MiB/10188msec) 00:18:48.314 slat (usec): min=20, max=372201, avg=8898.73, stdev=27485.41 00:18:48.314 clat (msec): min=45, max=741, avg=569.78, stdev=99.93 00:18:48.314 lat (msec): min=46, max=996, avg=578.67, stdev=99.39 00:18:48.314 clat percentiles (msec): 00:18:48.314 | 1.00th=[ 271], 5.00th=[ 439], 10.00th=[ 460], 20.00th=[ 485], 00:18:48.314 | 30.00th=[ 510], 40.00th=[ 542], 50.00th=[ 575], 60.00th=[ 609], 00:18:48.314 | 70.00th=[ 634], 80.00th=[ 659], 90.00th=[ 684], 95.00th=[ 709], 00:18:48.314 | 99.00th=[ 726], 99.50th=[ 743], 99.90th=[ 743], 99.95th=[ 743], 00:18:48.314 | 99.99th=[ 743] 00:18:48.314 bw ( KiB/s): min= 4104, max=35328, per=4.37%, avg=27162.00, stdev=8253.70, samples=20 00:18:48.314 iops : min= 16, max= 138, avg=106.10, stdev=32.25, samples=20 00:18:48.314 lat (msec) : 50=0.44%, 250=0.53%, 500=25.69%, 750=73.33% 00:18:48.314 cpu : usr=0.08%, sys=0.51%, ctx=207, majf=0, minf=4097 00:18:48.314 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:18:48.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.314 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.314 job7: (groupid=0, jobs=1): err= 0: pid=79284: Mon Dec 16 05:36:26 2024 00:18:48.314 read: IOPS=171, BW=42.8MiB/s (44.9MB/s)(435MiB/10156msec) 00:18:48.314 slat (usec): min=19, max=111122, avg=5656.61, stdev=13980.59 00:18:48.314 clat (msec): min=101, max=525, avg=367.53, stdev=56.80 00:18:48.314 lat (msec): min=105, max=525, avg=373.19, stdev=57.47 00:18:48.314 clat percentiles (msec): 00:18:48.314 | 1.00th=[ 163], 5.00th=[ 247], 10.00th=[ 284], 20.00th=[ 342], 00:18:48.314 | 30.00th=[ 359], 40.00th=[ 372], 50.00th=[ 380], 60.00th=[ 388], 00:18:48.314 | 70.00th=[ 397], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 435], 00:18:48.314 | 99.00th=[ 456], 99.50th=[ 489], 99.90th=[ 527], 99.95th=[ 527], 00:18:48.314 | 99.99th=[ 527] 00:18:48.314 bw ( KiB/s): min=38400, max=50176, per=6.90%, avg=42884.45, stdev=2806.57, samples=20 00:18:48.314 iops : min= 150, max= 196, avg=167.50, stdev=10.95, samples=20 00:18:48.314 lat (msec) : 250=5.92%, 500=93.73%, 750=0.35% 00:18:48.314 cpu : usr=0.11%, sys=0.77%, ctx=340, majf=0, minf=4097 00:18:48.314 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:18:48.314 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.314 issued rwts: total=1739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.314 job8: (groupid=0, jobs=1): err= 0: pid=79285: Mon Dec 16 05:36:26 2024 00:18:48.314 read: IOPS=172, BW=43.1MiB/s (45.2MB/s)(437MiB/10154msec) 00:18:48.314 slat (usec): min=20, max=141664, avg=5718.98, stdev=14717.39 00:18:48.314 clat (msec): min=75, max=509, avg=365.37, stdev=70.83 00:18:48.314 lat (msec): min=75, max=509, avg=371.09, stdev=71.75 00:18:48.314 clat percentiles (msec): 00:18:48.314 | 1.00th=[ 87], 5.00th=[ 205], 10.00th=[ 296], 20.00th=[ 338], 00:18:48.314 | 30.00th=[ 359], 40.00th=[ 376], 50.00th=[ 384], 60.00th=[ 393], 00:18:48.314 | 70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 430], 95.00th=[ 439], 00:18:48.314 | 99.00th=[ 472], 99.50th=[ 489], 99.90th=[ 510], 99.95th=[ 510], 00:18:48.314 | 99.99th=[ 510] 00:18:48.314 bw ( KiB/s): min=38912, max=64512, per=6.94%, avg=43136.20, stdev=5656.54, samples=20 00:18:48.314 iops : min= 152, max= 252, avg=168.45, stdev=22.11, samples=20 00:18:48.314 lat (msec) : 100=1.66%, 250=5.49%, 500=92.45%, 750=0.40% 00:18:48.314 cpu : usr=0.09%, sys=0.79%, ctx=321, majf=0, minf=4097 00:18:48.314 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:18:48.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.314 issued rwts: total=1749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.314 job9: (groupid=0, jobs=1): err= 0: pid=79286: Mon Dec 16 05:36:26 2024 00:18:48.314 read: IOPS=120, BW=30.1MiB/s (31.6MB/s)(307MiB/10185msec) 00:18:48.314 slat (usec): min=21, max=404218, avg=8162.34, stdev=23986.12 00:18:48.314 clat (msec): min=130, max=725, avg=522.87, stdev=105.27 00:18:48.314 lat (msec): min=130, max=892, avg=531.04, stdev=106.52 00:18:48.314 clat percentiles (msec): 00:18:48.314 | 1.00th=[ 136], 5.00th=[ 222], 10.00th=[ 447], 20.00th=[ 485], 00:18:48.314 | 30.00th=[ 502], 40.00th=[ 523], 50.00th=[ 542], 60.00th=[ 558], 00:18:48.314 | 70.00th=[ 575], 80.00th=[ 592], 90.00th=[ 617], 95.00th=[ 642], 00:18:48.314 | 99.00th=[ 667], 99.50th=[ 676], 99.90th=[ 726], 99.95th=[ 726], 00:18:48.314 | 99.99th=[ 726] 00:18:48.314 bw ( KiB/s): min=20992, max=36864, per=4.79%, avg=29749.95, stdev=4174.86, samples=20 00:18:48.314 iops : min= 82, max= 144, avg=116.15, stdev=16.29, samples=20 00:18:48.314 lat (msec) : 250=6.20%, 500=22.84%, 750=70.96% 00:18:48.314 cpu : usr=0.09%, sys=0.55%, ctx=225, majf=0, minf=4097 00:18:48.314 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.9% 00:18:48.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.314 issued rwts: total=1226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.314 job10: (groupid=0, jobs=1): err= 0: pid=79287: Mon Dec 16 05:36:26 2024 00:18:48.314 read: IOPS=165, BW=41.3MiB/s (43.3MB/s)(420MiB/10160msec) 00:18:48.314 slat (usec): min=20, max=171150, avg=5973.84, stdev=15703.08 00:18:48.314 clat (msec): min=21, max=549, avg=381.04, stdev=71.06 00:18:48.314 lat (msec): min=22, max=549, avg=387.01, stdev=71.45 
00:18:48.314 clat percentiles (msec): 00:18:48.314 | 1.00th=[ 53], 5.00th=[ 271], 10.00th=[ 305], 20.00th=[ 347], 00:18:48.314 | 30.00th=[ 368], 40.00th=[ 380], 50.00th=[ 388], 60.00th=[ 397], 00:18:48.314 | 70.00th=[ 409], 80.00th=[ 430], 90.00th=[ 460], 95.00th=[ 481], 00:18:48.314 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 550], 99.95th=[ 550], 00:18:48.314 | 99.99th=[ 550] 00:18:48.314 bw ( KiB/s): min=32768, max=50276, per=6.65%, avg=41323.40, stdev=3502.21, samples=20 00:18:48.314 iops : min= 128, max= 196, avg=161.40, stdev=13.63, samples=20 00:18:48.314 lat (msec) : 50=0.30%, 100=1.43%, 250=1.73%, 500=95.17%, 750=1.37% 00:18:48.314 cpu : usr=0.10%, sys=0.76%, ctx=321, majf=0, minf=4097 00:18:48.314 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:18:48.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.314 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.314 issued rwts: total=1678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.314 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.314 00:18:48.314 Run status group 0 (all jobs): 00:18:48.314 READ: bw=607MiB/s (636MB/s), 27.6MiB/s-149MiB/s (28.9MB/s-156MB/s), io=6187MiB (6488MB), run=10040-10194msec 00:18:48.314 00:18:48.314 Disk stats (read/write): 00:18:48.314 nvme0n1: ios=11683/0, merge=0/0, ticks=1237348/0, in_queue=1237348, util=97.89% 00:18:48.314 nvme10n1: ios=11813/0, merge=0/0, ticks=1237519/0, in_queue=1237519, util=97.98% 00:18:48.314 nvme1n1: ios=2384/0, merge=0/0, ticks=1213791/0, in_queue=1213791, util=98.16% 00:18:48.314 nvme2n1: ios=2210/0, merge=0/0, ticks=1207766/0, in_queue=1207766, util=98.26% 00:18:48.314 nvme3n1: ios=3371/0, merge=0/0, ticks=1223969/0, in_queue=1223969, util=98.33% 00:18:48.315 nvme4n1: ios=2268/0, merge=0/0, ticks=1211193/0, in_queue=1211193, util=98.57% 00:18:48.315 nvme5n1: ios=2123/0, merge=0/0, ticks=1213102/0, in_queue=1213102, util=98.65% 00:18:48.315 nvme6n1: ios=3350/0, merge=0/0, ticks=1222061/0, in_queue=1222061, util=98.72% 00:18:48.315 nvme7n1: ios=3370/0, merge=0/0, ticks=1221388/0, in_queue=1221388, util=98.96% 00:18:48.315 nvme8n1: ios=2325/0, merge=0/0, ticks=1212696/0, in_queue=1212696, util=99.13% 00:18:48.315 nvme9n1: ios=3232/0, merge=0/0, ticks=1225322/0, in_queue=1225322, util=99.19% 00:18:48.315 05:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:48.315 [global] 00:18:48.315 thread=1 00:18:48.315 invalidate=1 00:18:48.315 rw=randwrite 00:18:48.315 time_based=1 00:18:48.315 runtime=10 00:18:48.315 ioengine=libaio 00:18:48.315 direct=1 00:18:48.315 bs=262144 00:18:48.315 iodepth=64 00:18:48.315 norandommap=1 00:18:48.315 numjobs=1 00:18:48.315 00:18:48.315 [job0] 00:18:48.315 filename=/dev/nvme0n1 00:18:48.315 [job1] 00:18:48.315 filename=/dev/nvme10n1 00:18:48.315 [job2] 00:18:48.315 filename=/dev/nvme1n1 00:18:48.315 [job3] 00:18:48.315 filename=/dev/nvme2n1 00:18:48.315 [job4] 00:18:48.315 filename=/dev/nvme3n1 00:18:48.315 [job5] 00:18:48.315 filename=/dev/nvme4n1 00:18:48.315 [job6] 00:18:48.315 filename=/dev/nvme5n1 00:18:48.315 [job7] 00:18:48.315 filename=/dev/nvme6n1 00:18:48.315 [job8] 00:18:48.315 filename=/dev/nvme7n1 00:18:48.315 [job9] 00:18:48.315 filename=/dev/nvme8n1 00:18:48.315 [job10] 00:18:48.315 filename=/dev/nvme9n1 00:18:48.315 Could not set queue depth (nvme0n1) 00:18:48.315 Could not set queue depth 
(nvme10n1) 00:18:48.315 Could not set queue depth (nvme1n1) 00:18:48.315 Could not set queue depth (nvme2n1) 00:18:48.315 Could not set queue depth (nvme3n1) 00:18:48.315 Could not set queue depth (nvme4n1) 00:18:48.315 Could not set queue depth (nvme5n1) 00:18:48.315 Could not set queue depth (nvme6n1) 00:18:48.315 Could not set queue depth (nvme7n1) 00:18:48.315 Could not set queue depth (nvme8n1) 00:18:48.315 Could not set queue depth (nvme9n1) 00:18:48.315 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:48.315 fio-3.35 00:18:48.315 Starting 11 threads 00:18:58.295 00:18:58.295 job0: (groupid=0, jobs=1): err= 0: pid=79488: Mon Dec 16 05:36:37 2024 00:18:58.295 write: IOPS=214, BW=53.7MiB/s (56.3MB/s)(548MiB/10209msec); 0 zone resets 00:18:58.295 slat (usec): min=17, max=73345, avg=4556.56, stdev=8112.09 00:18:58.295 clat (msec): min=45, max=502, avg=293.36, stdev=34.37 00:18:58.295 lat (msec): min=45, max=502, avg=297.92, stdev=34.03 00:18:58.295 clat percentiles (msec): 00:18:58.295 | 1.00th=[ 121], 5.00th=[ 275], 10.00th=[ 279], 20.00th=[ 284], 00:18:58.295 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 296], 60.00th=[ 300], 00:18:58.295 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 309], 95.00th=[ 321], 00:18:58.295 | 99.00th=[ 409], 99.50th=[ 447], 99.90th=[ 481], 99.95th=[ 502], 00:18:58.295 | 99.99th=[ 502] 00:18:58.295 bw ( KiB/s): min=49053, max=55406, per=5.85%, avg=54502.95, stdev=1548.26, samples=20 00:18:58.295 iops : min= 191, max= 216, avg=212.85, stdev= 6.15, samples=20 00:18:58.295 lat (msec) : 50=0.18%, 100=0.55%, 250=3.01%, 500=96.17%, 750=0.09% 00:18:58.295 cpu : usr=0.34%, sys=0.65%, ctx=2776, majf=0, minf=1 00:18:58.295 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:18:58.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.295 issued rwts: total=0,2192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.295 job1: (groupid=0, jobs=1): err= 0: pid=79489: Mon Dec 16 05:36:37 2024 00:18:58.295 write: IOPS=885, BW=221MiB/s 
(232MB/s)(2228MiB/10064msec); 0 zone resets 00:18:58.295 slat (usec): min=15, max=34389, avg=1116.26, stdev=1921.66 00:18:58.295 clat (msec): min=36, max=160, avg=71.14, stdev= 9.97 00:18:58.295 lat (msec): min=36, max=160, avg=72.26, stdev= 9.93 00:18:58.295 clat percentiles (msec): 00:18:58.295 | 1.00th=[ 65], 5.00th=[ 66], 10.00th=[ 66], 20.00th=[ 67], 00:18:58.295 | 30.00th=[ 70], 40.00th=[ 70], 50.00th=[ 70], 60.00th=[ 70], 00:18:58.295 | 70.00th=[ 71], 80.00th=[ 71], 90.00th=[ 72], 95.00th=[ 73], 00:18:58.295 | 99.00th=[ 123], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 153], 00:18:58.295 | 99.99th=[ 161] 00:18:58.295 bw ( KiB/s): min=123639, max=235520, per=24.33%, avg=226521.15, stdev=24694.93, samples=20 00:18:58.295 iops : min= 482, max= 920, avg=884.80, stdev=96.68, samples=20 00:18:58.295 lat (msec) : 50=0.09%, 100=96.32%, 250=3.59% 00:18:58.295 cpu : usr=1.48%, sys=2.56%, ctx=10261, majf=0, minf=1 00:18:58.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:58.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.295 issued rwts: total=0,8911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.295 job2: (groupid=0, jobs=1): err= 0: pid=79495: Mon Dec 16 05:36:37 2024 00:18:58.295 write: IOPS=204, BW=51.0MiB/s (53.5MB/s)(521MiB/10209msec); 0 zone resets 00:18:58.295 slat (usec): min=16, max=189452, avg=4759.48, stdev=9219.89 00:18:58.295 clat (msec): min=180, max=514, avg=308.62, stdev=29.08 00:18:58.295 lat (msec): min=191, max=514, avg=313.38, stdev=28.15 00:18:58.295 clat percentiles (msec): 00:18:58.295 | 1.00th=[ 211], 5.00th=[ 245], 10.00th=[ 292], 20.00th=[ 296], 00:18:58.295 | 30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 313], 60.00th=[ 317], 00:18:58.295 | 70.00th=[ 317], 80.00th=[ 321], 90.00th=[ 326], 95.00th=[ 334], 00:18:58.295 | 99.00th=[ 414], 99.50th=[ 456], 99.90th=[ 493], 99.95th=[ 514], 00:18:58.295 | 99.99th=[ 514] 00:18:58.295 bw ( KiB/s): min=47104, max=53248, per=5.56%, avg=51737.60, stdev=1462.14, samples=20 00:18:58.295 iops : min= 184, max= 208, avg=202.10, stdev= 5.71, samples=20 00:18:58.295 lat (msec) : 250=5.61%, 500=94.29%, 750=0.10% 00:18:58.295 cpu : usr=0.32%, sys=0.70%, ctx=2009, majf=0, minf=1 00:18:58.295 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:18:58.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.295 issued rwts: total=0,2084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.295 job3: (groupid=0, jobs=1): err= 0: pid=79502: Mon Dec 16 05:36:37 2024 00:18:58.295 write: IOPS=536, BW=134MiB/s (141MB/s)(1356MiB/10102msec); 0 zone resets 00:18:58.295 slat (usec): min=18, max=59418, avg=1838.96, stdev=3214.55 00:18:58.295 clat (msec): min=61, max=213, avg=117.36, stdev= 8.75 00:18:58.295 lat (msec): min=61, max=213, avg=119.20, stdev= 8.27 00:18:58.295 clat percentiles (msec): 00:18:58.295 | 1.00th=[ 109], 5.00th=[ 110], 10.00th=[ 111], 20.00th=[ 112], 00:18:58.295 | 30.00th=[ 117], 40.00th=[ 117], 50.00th=[ 117], 60.00th=[ 118], 00:18:58.295 | 70.00th=[ 118], 80.00th=[ 120], 90.00th=[ 120], 95.00th=[ 127], 00:18:58.295 | 99.00th=[ 155], 99.50th=[ 182], 99.90th=[ 207], 99.95th=[ 207], 00:18:58.295 | 99.99th=[ 213] 
00:18:58.295 bw ( KiB/s): min=104657, max=141312, per=14.74%, avg=137201.30, stdev=7814.51, samples=20 00:18:58.295 iops : min= 408, max= 552, avg=535.85, stdev=30.72, samples=20 00:18:58.295 lat (msec) : 100=0.39%, 250=99.61% 00:18:58.295 cpu : usr=0.91%, sys=1.54%, ctx=6453, majf=0, minf=1 00:18:58.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:58.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.295 issued rwts: total=0,5422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.295 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.295 job4: (groupid=0, jobs=1): err= 0: pid=79503: Mon Dec 16 05:36:37 2024 00:18:58.295 write: IOPS=214, BW=53.7MiB/s (56.3MB/s)(549MiB/10218msec); 0 zone resets 00:18:58.296 slat (usec): min=16, max=75396, avg=4554.14, stdev=8106.01 00:18:58.296 clat (msec): min=27, max=502, avg=293.22, stdev=35.21 00:18:58.296 lat (msec): min=27, max=502, avg=297.78, stdev=34.94 00:18:58.296 clat percentiles (msec): 00:18:58.296 | 1.00th=[ 106], 5.00th=[ 275], 10.00th=[ 279], 20.00th=[ 284], 00:18:58.296 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 296], 60.00th=[ 300], 00:18:58.296 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 309], 95.00th=[ 321], 00:18:58.296 | 99.00th=[ 409], 99.50th=[ 447], 99.90th=[ 481], 99.95th=[ 502], 00:18:58.296 | 99.99th=[ 502] 00:18:58.296 bw ( KiB/s): min=47104, max=57344, per=5.86%, avg=54584.65, stdev=2189.76, samples=20 00:18:58.296 iops : min= 184, max= 224, avg=213.15, stdev= 8.58, samples=20 00:18:58.296 lat (msec) : 50=0.32%, 100=0.55%, 250=2.41%, 500=96.63%, 750=0.09% 00:18:58.296 cpu : usr=0.43%, sys=0.63%, ctx=2363, majf=0, minf=1 00:18:58.296 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:18:58.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.296 issued rwts: total=0,2195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.296 job5: (groupid=0, jobs=1): err= 0: pid=79504: Mon Dec 16 05:36:37 2024 00:18:58.296 write: IOPS=206, BW=51.6MiB/s (54.1MB/s)(527MiB/10207msec); 0 zone resets 00:18:58.296 slat (usec): min=15, max=93164, avg=4744.59, stdev=8599.58 00:18:58.296 clat (msec): min=91, max=502, avg=305.00, stdev=37.50 00:18:58.296 lat (msec): min=91, max=502, avg=309.74, stdev=37.27 00:18:58.296 clat percentiles (msec): 00:18:58.296 | 1.00th=[ 155], 5.00th=[ 224], 10.00th=[ 275], 20.00th=[ 296], 00:18:58.296 | 30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 313], 60.00th=[ 317], 00:18:58.296 | 70.00th=[ 321], 80.00th=[ 321], 90.00th=[ 326], 95.00th=[ 330], 00:18:58.296 | 99.00th=[ 405], 99.50th=[ 447], 99.90th=[ 481], 99.95th=[ 502], 00:18:58.296 | 99.99th=[ 502] 00:18:58.296 bw ( KiB/s): min=49250, max=63361, per=5.62%, avg=52319.20, stdev=3039.84, samples=20 00:18:58.296 iops : min= 192, max= 247, avg=204.30, stdev=11.75, samples=20 00:18:58.296 lat (msec) : 100=0.24%, 250=7.54%, 500=92.13%, 750=0.09% 00:18:58.296 cpu : usr=0.46%, sys=0.59%, ctx=1829, majf=0, minf=1 00:18:58.296 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:18:58.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.296 issued rwts: total=0,2108,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:58.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.296 job6: (groupid=0, jobs=1): err= 0: pid=79505: Mon Dec 16 05:36:37 2024 00:18:58.296 write: IOPS=539, BW=135MiB/s (141MB/s)(1362MiB/10105msec); 0 zone resets 00:18:58.296 slat (usec): min=16, max=13990, avg=1829.59, stdev=3122.11 00:18:58.296 clat (msec): min=16, max=216, avg=116.82, stdev= 9.98 00:18:58.296 lat (msec): min=16, max=216, avg=118.65, stdev= 9.62 00:18:58.296 clat percentiles (msec): 00:18:58.296 | 1.00th=[ 108], 5.00th=[ 110], 10.00th=[ 111], 20.00th=[ 112], 00:18:58.296 | 30.00th=[ 117], 40.00th=[ 117], 50.00th=[ 117], 60.00th=[ 118], 00:18:58.296 | 70.00th=[ 118], 80.00th=[ 120], 90.00th=[ 120], 95.00th=[ 128], 00:18:58.296 | 99.00th=[ 146], 99.50th=[ 165], 99.90th=[ 209], 99.95th=[ 209], 00:18:58.296 | 99.99th=[ 218] 00:18:58.296 bw ( KiB/s): min=118784, max=141312, per=14.81%, avg=137923.50, stdev=4726.09, samples=20 00:18:58.296 iops : min= 464, max= 552, avg=538.75, stdev=18.46, samples=20 00:18:58.296 lat (msec) : 20=0.07%, 50=0.37%, 100=0.44%, 250=99.12% 00:18:58.296 cpu : usr=1.00%, sys=1.60%, ctx=6641, majf=0, minf=1 00:18:58.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:58.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.296 issued rwts: total=0,5449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.296 job7: (groupid=0, jobs=1): err= 0: pid=79506: Mon Dec 16 05:36:37 2024 00:18:58.296 write: IOPS=207, BW=51.9MiB/s (54.4MB/s)(529MiB/10206msec); 0 zone resets 00:18:58.296 slat (usec): min=16, max=89593, avg=4698.70, stdev=8567.33 00:18:58.296 clat (msec): min=21, max=515, avg=303.69, stdev=48.49 00:18:58.296 lat (msec): min=21, max=515, avg=308.39, stdev=48.59 00:18:58.296 clat percentiles (msec): 00:18:58.296 | 1.00th=[ 61], 5.00th=[ 226], 10.00th=[ 271], 20.00th=[ 296], 00:18:58.296 | 30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 317], 00:18:58.296 | 70.00th=[ 321], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 338], 00:18:58.296 | 99.00th=[ 414], 99.50th=[ 456], 99.90th=[ 493], 99.95th=[ 514], 00:18:58.296 | 99.99th=[ 514] 00:18:58.296 bw ( KiB/s): min=49152, max=71823, per=5.65%, avg=52604.85, stdev=4642.26, samples=20 00:18:58.296 iops : min= 192, max= 280, avg=205.40, stdev=18.03, samples=20 00:18:58.296 lat (msec) : 50=0.76%, 100=1.13%, 250=5.81%, 500=92.21%, 750=0.09% 00:18:58.296 cpu : usr=0.41%, sys=0.57%, ctx=1302, majf=0, minf=1 00:18:58.296 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:18:58.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.296 issued rwts: total=0,2117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.296 job8: (groupid=0, jobs=1): err= 0: pid=79507: Mon Dec 16 05:36:37 2024 00:18:58.296 write: IOPS=215, BW=54.0MiB/s (56.6MB/s)(552MiB/10225msec); 0 zone resets 00:18:58.296 slat (usec): min=19, max=39844, avg=4468.64, stdev=7957.33 00:18:58.296 clat (msec): min=30, max=501, avg=291.75, stdev=36.47 00:18:58.296 lat (msec): min=30, max=501, avg=296.22, stdev=36.23 00:18:58.296 clat percentiles (msec): 00:18:58.296 | 1.00th=[ 107], 5.00th=[ 243], 10.00th=[ 275], 20.00th=[ 284], 
00:18:58.296 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 296], 60.00th=[ 300], 00:18:58.296 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 309], 95.00th=[ 321], 00:18:58.296 | 99.00th=[ 393], 99.50th=[ 447], 99.90th=[ 481], 99.95th=[ 502], 00:18:58.296 | 99.99th=[ 502] 00:18:58.296 bw ( KiB/s): min=47104, max=63361, per=5.89%, avg=54880.05, stdev=2767.01, samples=20 00:18:58.296 iops : min= 184, max= 247, avg=214.35, stdev=10.73, samples=20 00:18:58.296 lat (msec) : 50=0.36%, 100=0.54%, 250=4.62%, 500=94.38%, 750=0.09% 00:18:58.296 cpu : usr=0.45%, sys=0.67%, ctx=2403, majf=0, minf=1 00:18:58.296 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.1% 00:18:58.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.296 issued rwts: total=0,2208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.296 job9: (groupid=0, jobs=1): err= 0: pid=79508: Mon Dec 16 05:36:37 2024 00:18:58.296 write: IOPS=215, BW=53.8MiB/s (56.4MB/s)(550MiB/10218msec); 0 zone resets 00:18:58.296 slat (usec): min=14, max=124873, avg=4398.14, stdev=8344.04 00:18:58.296 clat (msec): min=126, max=494, avg=292.87, stdev=32.11 00:18:58.296 lat (msec): min=126, max=494, avg=297.26, stdev=31.89 00:18:58.296 clat percentiles (msec): 00:18:58.296 | 1.00th=[ 180], 5.00th=[ 239], 10.00th=[ 275], 20.00th=[ 284], 00:18:58.296 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 296], 60.00th=[ 300], 00:18:58.296 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 309], 95.00th=[ 334], 00:18:58.296 | 99.00th=[ 401], 99.50th=[ 439], 99.90th=[ 477], 99.95th=[ 493], 00:18:58.296 | 99.99th=[ 493] 00:18:58.296 bw ( KiB/s): min=38912, max=66938, per=5.87%, avg=54654.80, stdev=4659.67, samples=20 00:18:58.296 iops : min= 152, max= 261, avg=213.45, stdev=18.13, samples=20 00:18:58.296 lat (msec) : 250=5.46%, 500=94.54% 00:18:58.296 cpu : usr=0.37%, sys=0.63%, ctx=3584, majf=0, minf=1 00:18:58.296 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:18:58.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.296 issued rwts: total=0,2199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.296 job10: (groupid=0, jobs=1): err= 0: pid=79509: Mon Dec 16 05:36:37 2024 00:18:58.296 write: IOPS=225, BW=56.4MiB/s (59.1MB/s)(576MiB/10212msec); 0 zone resets 00:18:58.296 slat (usec): min=14, max=33773, avg=4294.44, stdev=7897.78 00:18:58.296 clat (msec): min=16, max=521, avg=279.24, stdev=75.04 00:18:58.296 lat (msec): min=16, max=521, avg=283.54, stdev=75.88 00:18:58.296 clat percentiles (msec): 00:18:58.296 | 1.00th=[ 53], 5.00th=[ 115], 10.00th=[ 122], 20.00th=[ 284], 00:18:58.296 | 30.00th=[ 296], 40.00th=[ 300], 50.00th=[ 309], 60.00th=[ 313], 00:18:58.296 | 70.00th=[ 317], 80.00th=[ 317], 90.00th=[ 321], 95.00th=[ 321], 00:18:58.296 | 99.00th=[ 401], 99.50th=[ 460], 99.90th=[ 502], 99.95th=[ 523], 00:18:58.296 | 99.99th=[ 523] 00:18:58.296 bw ( KiB/s): min=51097, max=129795, per=6.16%, avg=57387.60, stdev=18132.07, samples=20 00:18:58.296 iops : min= 199, max= 507, avg=224.10, stdev=70.85, samples=20 00:18:58.296 lat (msec) : 20=0.17%, 50=0.74%, 100=1.04%, 250=14.76%, 500=83.20% 00:18:58.296 lat (msec) : 750=0.09% 00:18:58.296 cpu : usr=0.41%, sys=0.67%, ctx=2418, majf=0, 
minf=2 00:18:58.296 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:18:58.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:58.296 issued rwts: total=0,2304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:58.296 00:18:58.296 Run status group 0 (all jobs): 00:18:58.297 WRITE: bw=909MiB/s (953MB/s), 51.0MiB/s-221MiB/s (53.5MB/s-232MB/s), io=9297MiB (9749MB), run=10064-10225msec 00:18:58.297 00:18:58.297 Disk stats (read/write): 00:18:58.297 nvme0n1: ios=50/4265, merge=0/0, ticks=51/1206396, in_queue=1206447, util=98.23% 00:18:58.297 nvme10n1: ios=49/17695, merge=0/0, ticks=42/1217766, in_queue=1217808, util=98.19% 00:18:58.297 nvme1n1: ios=49/4047, merge=0/0, ticks=66/1207058, in_queue=1207124, util=98.58% 00:18:58.297 nvme2n1: ios=44/10721, merge=0/0, ticks=54/1216336, in_queue=1216390, util=98.48% 00:18:58.297 nvme3n1: ios=38/4271, merge=0/0, ticks=62/1207902, in_queue=1207964, util=98.61% 00:18:58.297 nvme4n1: ios=24/4090, merge=0/0, ticks=33/1205891, in_queue=1205924, util=98.58% 00:18:58.297 nvme5n1: ios=0/10781, merge=0/0, ticks=0/1217181, in_queue=1217181, util=98.59% 00:18:58.297 nvme6n1: ios=0/4115, merge=0/0, ticks=0/1206138, in_queue=1206138, util=98.64% 00:18:58.297 nvme7n1: ios=0/4296, merge=0/0, ticks=0/1208869, in_queue=1208869, util=98.94% 00:18:58.297 nvme8n1: ios=0/4272, merge=0/0, ticks=0/1208444, in_queue=1208444, util=98.92% 00:18:58.297 nvme9n1: ios=0/4490, merge=0/0, ticks=0/1207412, in_queue=1207412, util=99.05% 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:58.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:58.297 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:58.297 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:58.297 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:58.297 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:58.297 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.297 05:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:58.297 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.297 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.297 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:58.297 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:58.298 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:58.298 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
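Each iteration above follows the same teardown pattern from multiconnection.sh: disconnect the initiator from the subsystem, wait until its serial (SPDKn) disappears from lsblk, then delete the subsystem on the target over RPC. A minimal bash sketch of that loop, assuming scripts/rpc.py stands in for the rpc_cmd wrapper and that subsystem i was created with serial SPDK$i:

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # Drop the initiator-side connection to this subsystem.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # Wait for the namespace with serial SPDK$i to vanish from the block layer.
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
            sleep 1
        done
        # Remove the subsystem on the target side.
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done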
00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:58.298 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:58.298 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- 
# set +x 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:58.298 rmmod nvme_tcp 00:18:58.298 rmmod nvme_fabrics 00:18:58.298 rmmod nvme_keyring 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 78816 ']' 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 78816 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 78816 ']' 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 78816 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.298 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78816 00:18:58.558 killing process with pid 78816 00:18:58.558 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.558 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.558 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78816' 00:18:58.558 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 78816 00:18:58.558 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 78816 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:01.094 05:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:19:01.094 00:19:01.094 real 0m52.187s 00:19:01.094 user 2m56.479s 00:19:01.094 sys 0m27.962s 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:01.094 ************************************ 00:19:01.094 END TEST nvmf_multiconnection 00:19:01.094 ************************************ 00:19:01.094 05:36:41 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:01.094 ************************************ 00:19:01.094 START TEST nvmf_initiator_timeout 00:19:01.094 ************************************ 00:19:01.094 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:01.355 * Looking for test storage... 00:19:01.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.355 --rc genhtml_branch_coverage=1 00:19:01.355 --rc genhtml_function_coverage=1 00:19:01.355 --rc genhtml_legend=1 00:19:01.355 --rc geninfo_all_blocks=1 00:19:01.355 --rc geninfo_unexecuted_blocks=1 00:19:01.355 00:19:01.355 ' 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.355 --rc genhtml_branch_coverage=1 00:19:01.355 --rc genhtml_function_coverage=1 00:19:01.355 --rc genhtml_legend=1 00:19:01.355 --rc geninfo_all_blocks=1 00:19:01.355 --rc geninfo_unexecuted_blocks=1 00:19:01.355 00:19:01.355 ' 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.355 --rc genhtml_branch_coverage=1 00:19:01.355 --rc genhtml_function_coverage=1 00:19:01.355 --rc genhtml_legend=1 00:19:01.355 --rc geninfo_all_blocks=1 00:19:01.355 --rc geninfo_unexecuted_blocks=1 00:19:01.355 00:19:01.355 ' 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:01.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.355 --rc genhtml_branch_coverage=1 00:19:01.355 --rc genhtml_function_coverage=1 00:19:01.355 --rc genhtml_legend=1 00:19:01.355 --rc geninfo_all_blocks=1 00:19:01.355 --rc geninfo_unexecuted_blocks=1 00:19:01.355 00:19:01.355 ' 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.355 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.356 05:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:01.356 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
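The assignments above (and those continuing below) name the virtual topology that nvmf_veth_init builds for NET_TYPE=virt: initiator-side veth endpoints on the host at 10.0.0.1/10.0.0.2, target-side endpoints at 10.0.0.3/10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. A condensed iproute2 sketch of that layout for one initiator/target pair, mirroring the trace that follows:

    ip netns add nvmf_tgt_ns_spdk
    # One veth pair per interface; the *_br end stays on the host and joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Let NVMe/TCP traffic to the default port through the host firewall.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT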
00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:01.356 Cannot find device "nvmf_init_br" 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:01.356 Cannot find device "nvmf_init_br2" 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:01.356 Cannot find device "nvmf_tgt_br" 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:01.356 Cannot find device "nvmf_tgt_br2" 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:01.356 Cannot find device "nvmf_init_br" 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:19:01.356 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:01.616 Cannot find device "nvmf_init_br2" 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:01.616 Cannot find device "nvmf_tgt_br" 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:01.616 Cannot find device "nvmf_tgt_br2" 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:19:01.616 05:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:01.616 Cannot find device "nvmf_br" 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:01.616 Cannot find device "nvmf_init_if" 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:01.616 Cannot find device "nvmf_init_if2" 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:01.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:01.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:01.616 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:01.616 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:01.616 00:19:01.616 --- 10.0.0.3 ping statistics --- 00:19:01.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.616 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:01.616 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:01.876 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:01.876 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:19:01.876 00:19:01.876 --- 10.0.0.4 ping statistics --- 00:19:01.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.876 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:01.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:01.876 00:19:01.876 --- 10.0.0.1 ping statistics --- 00:19:01.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.876 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:01.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:19:01.876 00:19:01.876 --- 10.0.0.2 ping statistics --- 00:19:01.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.876 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=79950 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 79950 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 79950 ']' 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.876 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:01.876 05:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.877 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.877 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.877 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:01.877 [2024-12-16 05:36:42.014942] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:01.877 [2024-12-16 05:36:42.015136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.136 [2024-12-16 05:36:42.185935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:02.136 [2024-12-16 05:36:42.278327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.136 [2024-12-16 05:36:42.278435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.136 [2024-12-16 05:36:42.278455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.136 [2024-12-16 05:36:42.278468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.136 [2024-12-16 05:36:42.278482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
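nvmfappstart then launches the SPDK target inside that namespace and waits for its RPC socket; the EAL, trace, and reactor notices around here are that process coming up. A rough stand-in for the launch-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket and the repo checkout path seen above:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Simplified waitforlisten: poll until the RPC listener socket exists,
    # bailing out early if the target process dies first.
    while ! [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done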
00:19:02.136 [2024-12-16 05:36:42.280350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.136 [2024-12-16 05:36:42.280883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.136 [2024-12-16 05:36:42.281205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.136 [2024-12-16 05:36:42.281736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.395 [2024-12-16 05:36:42.447928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.964 05:36:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:02.964 Malloc0 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:02.964 Delay0 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:02.964 [2024-12-16 05:36:43.068951] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:02.964 05:36:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:02.964 [2024-12-16 05:36:43.101251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.964 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:03.224 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:03.224 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:19:03.224 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.224 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:03.224 05:36:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:19:05.128 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:05.128 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:05.128 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.128 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:05.128 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.128 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:19:05.128 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=80016 00:19:05.128 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:05.128 05:36:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:05.128 [global] 00:19:05.128 thread=1 00:19:05.128 invalidate=1 00:19:05.128 rw=write 00:19:05.128 time_based=1 00:19:05.128 runtime=60 00:19:05.128 ioengine=libaio 00:19:05.128 direct=1 00:19:05.128 bs=4096 00:19:05.128 iodepth=1 00:19:05.128 norandommap=0 00:19:05.128 numjobs=1 00:19:05.128 00:19:05.128 verify_dump=1 00:19:05.128 verify_backlog=512 00:19:05.128 verify_state_save=0 00:19:05.128 do_verify=1 00:19:05.128 verify=crc32c-intel 00:19:05.128 [job0] 00:19:05.128 filename=/dev/nvme0n1 00:19:05.128 Could not set queue depth (nvme0n1) 00:19:05.386 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.386 fio-3.35 00:19:05.386 Starting 1 thread 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.670 true 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.670 true 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.670 true 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.670 true 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.670 05:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:11.205 true 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:11.205 true 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:11.205 true 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:11.205 true 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:11.205 05:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 80016 00:20:07.469 00:20:07.469 job0: (groupid=0, jobs=1): err= 0: pid=80037: Mon Dec 16 05:37:45 2024 00:20:07.469 read: IOPS=707, BW=2829KiB/s (2897kB/s)(166MiB/60000msec) 00:20:07.469 slat (nsec): min=10686, max=91177, avg=13995.94, stdev=4798.67 00:20:07.469 clat (usec): min=190, max=2523, avg=237.79, stdev=25.99 00:20:07.469 lat (usec): min=201, max=2560, avg=251.78, stdev=27.18 00:20:07.470 clat percentiles (usec): 00:20:07.470 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:20:07.470 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:20:07.470 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 277], 00:20:07.470 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 347], 99.95th=[ 359], 00:20:07.470 | 99.99th=[ 570] 00:20:07.470 write: IOPS=708, BW=2833KiB/s (2901kB/s)(166MiB/60000msec); 0 zone resets 00:20:07.470 slat (usec): min=12, max=13223, avg=20.99, stdev=74.58 00:20:07.470 clat (usec): min=138, max=40650k, avg=1136.36, stdev=197191.26 00:20:07.470 lat (usec): min=154, max=40650k, avg=1157.35, stdev=197191.26 00:20:07.470 clat percentiles (usec): 00:20:07.470 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:20:07.470 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:20:07.470 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 223], 00:20:07.470 | 
99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 281], 99.95th=[ 293], 00:20:07.470 | 99.99th=[ 783] 00:20:07.470 bw ( KiB/s): min= 2360, max=10208, per=100.00%, avg=8514.87, stdev=1312.42, samples=39 00:20:07.470 iops : min= 590, max= 2552, avg=2128.72, stdev=328.11, samples=39 00:20:07.470 lat (usec) : 250=86.55%, 500=13.43%, 750=0.01%, 1000=0.01% 00:20:07.470 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:07.470 cpu : usr=0.52%, sys=1.94%, ctx=84944, majf=0, minf=5 00:20:07.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.470 issued rwts: total=42442,42496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:07.470 00:20:07.470 Run status group 0 (all jobs): 00:20:07.470 READ: bw=2829KiB/s (2897kB/s), 2829KiB/s-2829KiB/s (2897kB/s-2897kB/s), io=166MiB (174MB), run=60000-60000msec 00:20:07.470 WRITE: bw=2833KiB/s (2901kB/s), 2833KiB/s-2833KiB/s (2901kB/s-2901kB/s), io=166MiB (174MB), run=60000-60000msec 00:20:07.470 00:20:07.470 Disk stats (read/write): 00:20:07.470 nvme0n1: ios=42330/42496, merge=0/0, ticks=10615/8280, in_queue=18895, util=99.91% 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:07.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:07.470 nvmf hotplug test: fio successful as expected 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:07.470 
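The fio run and latency toggling traced above are the heart of the initiator-timeout check: a 60-second write-plus-verify job runs against the exported namespace while the Delay0 passthru bdev's latencies are first inflated (stalling completions) and then dropped back so the queued I/O can finish and verify cleanly. A condensed sketch of the equivalent manual steps, reconstructed from the commands and the job file dumped above (the direct rpc.py invocation and the microsecond unit for the latency values are assumptions; the test itself drives this through rpc_cmd and fio-wrapper):

# job file with a subset of the options dumped above
cat > initiator_timeout.fio <<'EOF'
[global]
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
do_verify=1
verify=crc32c-intel
verify_dump=1
verify_backlog=512
[job0]
filename=/dev/nvme0n1
EOF
fio initiator_timeout.fio & fio_pid=$!
# inflate Delay0 latencies (presumably microseconds, i.e. roughly 31 s) to stall completions ...
for lat in avg_read avg_write p99_read; do
    scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000
done
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
# ... then drop them back so the verify pass can complete before the 60 s runtime expires
for lat in avg_read avg_write p99_read p99_write; do
    scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
done
wait "$fio_pid"   # exit 0 is what the log below reports as "fio successful as expected"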
05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:07.470 rmmod nvme_tcp 00:20:07.470 rmmod nvme_fabrics 00:20:07.470 rmmod nvme_keyring 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 79950 ']' 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 79950 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 79950 ']' 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 79950 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79950 00:20:07.470 killing process with pid 79950 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79950' 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 79950 00:20:07.470 05:37:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 79950 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:07.470 05:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:07.470 05:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:20:07.470 00:20:07.470 real 1m5.773s 00:20:07.470 user 3m56.113s 00:20:07.470 sys 0m21.096s 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:07.470 ************************************ 00:20:07.470 END TEST nvmf_initiator_timeout 00:20:07.470 ************************************ 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.470 ************************************ 00:20:07.470 START TEST nvmf_nsid 00:20:07.470 ************************************ 00:20:07.470 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:07.470 * Looking for test storage... 00:20:07.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:07.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.471 --rc genhtml_branch_coverage=1 00:20:07.471 --rc genhtml_function_coverage=1 00:20:07.471 --rc genhtml_legend=1 00:20:07.471 --rc geninfo_all_blocks=1 00:20:07.471 --rc geninfo_unexecuted_blocks=1 00:20:07.471 00:20:07.471 ' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:07.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.471 --rc genhtml_branch_coverage=1 00:20:07.471 --rc genhtml_function_coverage=1 00:20:07.471 --rc genhtml_legend=1 00:20:07.471 --rc geninfo_all_blocks=1 00:20:07.471 --rc geninfo_unexecuted_blocks=1 00:20:07.471 00:20:07.471 ' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:07.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.471 --rc genhtml_branch_coverage=1 00:20:07.471 --rc genhtml_function_coverage=1 00:20:07.471 --rc genhtml_legend=1 00:20:07.471 --rc geninfo_all_blocks=1 00:20:07.471 --rc geninfo_unexecuted_blocks=1 00:20:07.471 00:20:07.471 ' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:07.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.471 --rc genhtml_branch_coverage=1 00:20:07.471 --rc genhtml_function_coverage=1 00:20:07.471 --rc genhtml_legend=1 00:20:07.471 --rc geninfo_all_blocks=1 00:20:07.471 --rc geninfo_unexecuted_blocks=1 00:20:07.471 00:20:07.471 ' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
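The lt/decimal/cmp_versions calls traced above are scripts/common.sh deciding whether the installed lcov predates 2.0, which selects the --rc lcov_* coverage flags exported just after. A minimal stand-alone sketch of that field-by-field comparison (the version_lt name and the pure-bash condensation are assumptions; the real helpers live in scripts/common.sh):

version_lt() {                       # returns 0 (true) when $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                         # equal versions are not "less than"
}
version_lt "1.15" "2" && echo "lcov < 2: keep the legacy --rc lcov_branch_coverage options"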
00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.471 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:07.471 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:07.472 Cannot find device "nvmf_init_br" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:07.472 Cannot find device "nvmf_init_br2" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:07.472 Cannot find device "nvmf_tgt_br" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.472 Cannot find device "nvmf_tgt_br2" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:07.472 Cannot find device "nvmf_init_br" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:07.472 Cannot find device "nvmf_init_br2" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:07.472 Cannot find device "nvmf_tgt_br" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:07.472 Cannot find device "nvmf_tgt_br2" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:07.472 Cannot find device "nvmf_br" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:07.472 Cannot find device "nvmf_init_if" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:07.472 Cannot find device "nvmf_init_if2" 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:20:07.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:07.472 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
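The ip commands above build the NET_TYPE=virt test network: a target-side namespace (nvmf_tgt_ns_spdk) joined to the host through four veth pairs whose bridge-side peers are all enslaved to a single bridge, so the initiator addresses (10.0.0.1/2) can reach the target addresses (10.0.0.3/4). A condensed summary of the topology those commands create (this restates the traced setup rather than adding to it):

ip netns add nvmf_tgt_ns_spdk
# initiator-side veths stay in the default namespace, target-side veths move into the netns
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # carries 10.0.0.1/24
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # carries 10.0.0.2/24
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # carries 10.0.0.3/24 inside the netns
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # carries 10.0.0.4/24 inside the netns
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# one bridge ties the *_br peers together, forming the data path between host and namespace
ip link add nvmf_br type bridge
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" master nvmf_br
done
# the iptables ACCEPT rules and the four pings that follow only confirm this path before the target starts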
00:20:07.731 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:07.731 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:07.731 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:07.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:07.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:20:07.732 00:20:07.732 --- 10.0.0.3 ping statistics --- 00:20:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.732 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:07.732 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:07.732 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:20:07.732 00:20:07.732 --- 10.0.0.4 ping statistics --- 00:20:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.732 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:07.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:07.732 00:20:07.732 --- 10.0.0.1 ping statistics --- 00:20:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.732 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:07.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:07.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:20:07.732 00:20:07.732 --- 10.0.0.2 ping statistics --- 00:20:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.732 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=80907 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 80907 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 80907 ']' 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.732 05:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:07.732 [2024-12-16 05:37:47.915080] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:20:07.732 [2024-12-16 05:37:47.915236] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.991 [2024-12-16 05:37:48.104106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.991 [2024-12-16 05:37:48.230981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.991 [2024-12-16 05:37:48.231051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.991 [2024-12-16 05:37:48.231075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.991 [2024-12-16 05:37:48.231104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.991 [2024-12-16 05:37:48.231121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.991 [2024-12-16 05:37:48.232549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.250 [2024-12-16 05:37:48.437352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=80939 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=15297834-fe8e-4ebf-8561-dbd3d735748e 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ed98217e-0111-4e5a-93cf-5064b96578f6 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4ac9b8ae-6339-4fa2-a3f2-e370904c1d60 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.818 05:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:08.818 null0 00:20:08.818 null1 00:20:08.818 null2 00:20:08.818 [2024-12-16 05:37:49.018344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.818 [2024-12-16 05:37:49.042636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:09.077 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.077 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 80939 /var/tmp/tgt2.sock 00:20:09.077 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 80939 ']' 00:20:09.077 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:09.077 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:20:09.077 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:09.077 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.077 05:37:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:09.077 [2024-12-16 05:37:49.106816] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:20:09.077 [2024-12-16 05:37:49.106974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80939 ] 00:20:09.077 [2024-12-16 05:37:49.296241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.337 [2024-12-16 05:37:49.444129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.596 [2024-12-16 05:37:49.684802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:10.193 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.193 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:10.193 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:10.452 [2024-12-16 05:37:50.587605] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.452 [2024-12-16 05:37:50.603800] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:10.452 nvme0n1 nvme0n2 00:20:10.452 nvme1n1 00:20:10.452 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:10.452 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:10.452 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:10.710 05:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 15297834-fe8e-4ebf-8561-dbd3d735748e 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=15297834fe8e4ebf8561dbd3d735748e 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 15297834FE8E4EBF8561DBD3D735748E 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 15297834FE8E4EBF8561DBD3D735748E == \1\5\2\9\7\8\3\4\F\E\8\E\4\E\B\F\8\5\6\1\D\B\D\3\D\7\3\5\7\4\8\E ]] 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:11.647 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ed98217e-0111-4e5a-93cf-5064b96578f6 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ed98217e01114e5a93cf5064b96578f6 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo ED98217E01114E5A93CF5064B96578F6 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ ED98217E01114E5A93CF5064B96578F6 == \E\D\9\8\2\1\7\E\0\1\1\1\4\E\5\A\9\3\C\F\5\0\6\4\B\9\6\5\7\8\F\6 ]] 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:11.906 05:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4ac9b8ae-6339-4fa2-a3f2-e370904c1d60 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4ac9b8ae63394fa2a3f2e370904c1d60 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4AC9B8AE63394FA2A3F2E370904C1D60 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4AC9B8AE63394FA2A3F2E370904C1D60 == \4\A\C\9\B\8\A\E\6\3\3\9\4\F\A\2\A\3\F\2\E\3\7\0\9\0\4\C\1\D\6\0 ]] 00:20:11.906 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 80939 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 80939 ']' 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 80939 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80939 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:12.166 killing process with pid 80939 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80939' 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 80939 00:20:12.166 05:37:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 80939 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:14.072 
05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:14.072 rmmod nvme_tcp 00:20:14.072 rmmod nvme_fabrics 00:20:14.072 rmmod nvme_keyring 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 80907 ']' 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 80907 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 80907 ']' 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 80907 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80907 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.072 killing process with pid 80907 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80907' 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 80907 00:20:14.072 05:37:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 80907 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:15.012 05:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:15.012 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.013 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.013 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:15.013 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.013 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.013 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.013 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:20:15.013 00:20:15.013 real 0m8.108s 00:20:15.013 user 0m12.621s 00:20:15.013 sys 0m1.958s 00:20:15.013 ************************************ 00:20:15.013 END TEST nvmf_nsid 00:20:15.013 ************************************ 00:20:15.013 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.013 05:37:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:15.272 05:37:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:15.272 00:20:15.272 real 7m44.185s 00:20:15.272 user 18m45.089s 00:20:15.272 sys 1m56.358s 00:20:15.272 05:37:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.272 ************************************ 00:20:15.272 END TEST nvmf_target_extra 00:20:15.272 05:37:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:15.272 ************************************ 00:20:15.272 05:37:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:15.272 05:37:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:15.272 05:37:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.272 05:37:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.272 ************************************ 00:20:15.272 START TEST nvmf_host 00:20:15.272 ************************************ 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:15.272 * Looking for test storage... 
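
The nvmf_nsid test that finishes above reduces to one check per namespace: the UUID handed to the target must round-trip as the NGUID that nvme-cli reports for the matching block device. A minimal standalone sketch of that check, assuming nvme-cli and jq are installed; the device name and UUID below are taken from this run's trace and change on every run:

  uuid="15297834-fe8e-4ebf-8561-dbd3d735748e"        # UUID given to the target for nsid 1
  expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
  [[ "$actual" == "$expected" ]] && echo "nguid matches uuid" || echo "nguid mismatch"
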
00:20:15.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:15.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.272 --rc genhtml_branch_coverage=1 00:20:15.272 --rc genhtml_function_coverage=1 00:20:15.272 --rc genhtml_legend=1 00:20:15.272 --rc geninfo_all_blocks=1 00:20:15.272 --rc geninfo_unexecuted_blocks=1 00:20:15.272 00:20:15.272 ' 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:15.272 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:15.272 --rc genhtml_branch_coverage=1 00:20:15.272 --rc genhtml_function_coverage=1 00:20:15.272 --rc genhtml_legend=1 00:20:15.272 --rc geninfo_all_blocks=1 00:20:15.272 --rc geninfo_unexecuted_blocks=1 00:20:15.272 00:20:15.272 ' 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:15.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.272 --rc genhtml_branch_coverage=1 00:20:15.272 --rc genhtml_function_coverage=1 00:20:15.272 --rc genhtml_legend=1 00:20:15.272 --rc geninfo_all_blocks=1 00:20:15.272 --rc geninfo_unexecuted_blocks=1 00:20:15.272 00:20:15.272 ' 00:20:15.272 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:15.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.272 --rc genhtml_branch_coverage=1 00:20:15.272 --rc genhtml_function_coverage=1 00:20:15.272 --rc genhtml_legend=1 00:20:15.272 --rc geninfo_all_blocks=1 00:20:15.272 --rc geninfo_unexecuted_blocks=1 00:20:15.272 00:20:15.272 ' 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:15.273 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:15.273 
05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.273 05:37:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.532 ************************************ 00:20:15.532 START TEST nvmf_identify 00:20:15.532 ************************************ 00:20:15.532 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:15.532 * Looking for test storage... 00:20:15.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.532 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:15.532 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:15.532 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:20:15.532 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:15.532 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:15.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.533 --rc genhtml_branch_coverage=1 00:20:15.533 --rc genhtml_function_coverage=1 00:20:15.533 --rc genhtml_legend=1 00:20:15.533 --rc geninfo_all_blocks=1 00:20:15.533 --rc geninfo_unexecuted_blocks=1 00:20:15.533 00:20:15.533 ' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:15.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.533 --rc genhtml_branch_coverage=1 00:20:15.533 --rc genhtml_function_coverage=1 00:20:15.533 --rc genhtml_legend=1 00:20:15.533 --rc geninfo_all_blocks=1 00:20:15.533 --rc geninfo_unexecuted_blocks=1 00:20:15.533 00:20:15.533 ' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:15.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.533 --rc genhtml_branch_coverage=1 00:20:15.533 --rc genhtml_function_coverage=1 00:20:15.533 --rc genhtml_legend=1 00:20:15.533 --rc geninfo_all_blocks=1 00:20:15.533 --rc geninfo_unexecuted_blocks=1 00:20:15.533 00:20:15.533 ' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:15.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.533 --rc genhtml_branch_coverage=1 00:20:15.533 --rc genhtml_function_coverage=1 00:20:15.533 --rc genhtml_legend=1 00:20:15.533 --rc geninfo_all_blocks=1 00:20:15.533 --rc geninfo_unexecuted_blocks=1 00:20:15.533 00:20:15.533 ' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.533 
05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:15.533 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.533 05:37:55 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:15.533 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:15.534 Cannot find device "nvmf_init_br" 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:15.534 Cannot find device "nvmf_init_br2" 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:15.534 Cannot find device "nvmf_tgt_br" 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:15.534 Cannot find device "nvmf_tgt_br2" 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:20:15.534 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:15.793 Cannot find device "nvmf_init_br" 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:15.793 Cannot find device "nvmf_init_br2" 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:15.793 Cannot find device "nvmf_tgt_br" 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:15.793 Cannot find device "nvmf_tgt_br2" 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:15.793 Cannot find device "nvmf_br" 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:15.793 Cannot find device "nvmf_init_if" 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:15.793 Cannot find device "nvmf_init_if2" 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:15.793 
05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:15.793 05:37:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:15.793 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:15.793 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:15.793 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:15.793 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:15.793 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:15.793 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:15.793 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:15.793 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:15.793 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:16.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:16.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:16.053 00:20:16.053 --- 10.0.0.3 ping statistics --- 00:20:16.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.053 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:16.053 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:16.053 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:20:16.053 00:20:16.053 --- 10.0.0.4 ping statistics --- 00:20:16.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.053 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:16.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:16.053 00:20:16.053 --- 10.0.0.1 ping statistics --- 00:20:16.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.053 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:16.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:20:16.053 00:20:16.053 --- 10.0.0.2 ping statistics --- 00:20:16.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.053 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=81326 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 81326 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 81326 ']' 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:16.053 05:37:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 [2024-12-16 05:37:56.277603] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:16.053 [2024-12-16 05:37:56.277804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.312 [2024-12-16 05:37:56.469530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.571 [2024-12-16 05:37:56.604337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.571 [2024-12-16 05:37:56.604421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.571 [2024-12-16 05:37:56.604458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.571 [2024-12-16 05:37:56.604483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.571 [2024-12-16 05:37:56.604509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
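
For reference, the NET_TYPE=virt setup traced a little earlier (nvmf/common.sh@177 through @222 above) builds the whole test fabric out of veth pairs: initiator-side interfaces stay in the root namespace, target-side interfaces are moved into nvmf_tgt_ns_spdk, and both sides hang off the nvmf_br bridge. A condensed sketch with a single initiator/target pair (the run above creates two of each), reusing the names and addresses from the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3        # root namespace reaching the target-side address
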
00:20:16.571 [2024-12-16 05:37:56.606798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.571 [2024-12-16 05:37:56.606965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.571 [2024-12-16 05:37:56.607135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.571 [2024-12-16 05:37:56.607505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.571 [2024-12-16 05:37:56.804872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.138 [2024-12-16 05:37:57.199033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.138 Malloc0 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.138 [2024-12-16 05:37:57.355511] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.138 [ 00:20:17.138 { 00:20:17.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:17.138 "subtype": "Discovery", 00:20:17.138 "listen_addresses": [ 00:20:17.138 { 00:20:17.138 "trtype": "TCP", 00:20:17.138 "adrfam": "IPv4", 00:20:17.138 "traddr": "10.0.0.3", 00:20:17.138 "trsvcid": "4420" 00:20:17.138 } 00:20:17.138 ], 00:20:17.138 "allow_any_host": true, 00:20:17.138 "hosts": [] 00:20:17.138 }, 00:20:17.138 { 00:20:17.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.138 "subtype": "NVMe", 00:20:17.138 "listen_addresses": [ 00:20:17.138 { 00:20:17.138 "trtype": "TCP", 00:20:17.138 "adrfam": "IPv4", 00:20:17.138 "traddr": "10.0.0.3", 00:20:17.138 "trsvcid": "4420" 00:20:17.138 } 00:20:17.138 ], 00:20:17.138 "allow_any_host": true, 00:20:17.138 "hosts": [], 00:20:17.138 "serial_number": "SPDK00000000000001", 00:20:17.138 "model_number": "SPDK bdev Controller", 00:20:17.138 "max_namespaces": 32, 00:20:17.138 "min_cntlid": 1, 00:20:17.138 "max_cntlid": 65519, 00:20:17.138 "namespaces": [ 00:20:17.138 { 00:20:17.138 "nsid": 1, 00:20:17.138 "bdev_name": "Malloc0", 00:20:17.138 "name": "Malloc0", 00:20:17.138 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:17.138 "eui64": "ABCDEF0123456789", 00:20:17.138 "uuid": "261745ff-e76d-4df3-ac28-a194a8c2cd61" 00:20:17.138 } 00:20:17.138 ] 00:20:17.138 } 00:20:17.138 ] 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.138 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:17.397 [2024-12-16 05:37:57.438704] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
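
The subsystem that spdk_nvme_identify probes below was assembled entirely through the rpc_cmd calls traced above. Issued directly with rpc.py (which, per the waitforlisten message above, talks to the target's /var/tmp/spdk.sock RPC socket), the same sequence looks roughly like this, with the paths, NGUID and EUI64 values copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems          # prints the discovery + cnode1 JSON shown above
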
00:20:17.397 [2024-12-16 05:37:57.438860] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81361 ] 00:20:17.397 [2024-12-16 05:37:57.637789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:17.397 [2024-12-16 05:37:57.637938] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:17.397 [2024-12-16 05:37:57.637955] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:17.397 [2024-12-16 05:37:57.637980] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:17.397 [2024-12-16 05:37:57.637996] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:17.397 [2024-12-16 05:37:57.638428] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:17.397 [2024-12-16 05:37:57.638513] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:17.397 [2024-12-16 05:37:57.644662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:17.397 [2024-12-16 05:37:57.644718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:17.397 [2024-12-16 05:37:57.644729] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:17.397 [2024-12-16 05:37:57.644738] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:17.397 [2024-12-16 05:37:57.644821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.397 [2024-12-16 05:37:57.644840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.397 [2024-12-16 05:37:57.644849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.397 [2024-12-16 05:37:57.644882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:17.397 [2024-12-16 05:37:57.644929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.659 [2024-12-16 05:37:57.655707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.659 [2024-12-16 05:37:57.655742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.659 [2024-12-16 05:37:57.655752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.659 [2024-12-16 05:37:57.655761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.659 [2024-12-16 05:37:57.655805] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:17.659 [2024-12-16 05:37:57.655822] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:17.659 [2024-12-16 05:37:57.655842] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:17.659 [2024-12-16 05:37:57.655900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.659 [2024-12-16 05:37:57.655910] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.659 [2024-12-16 05:37:57.655918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.659 [2024-12-16 05:37:57.655936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.659 [2024-12-16 05:37:57.655977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.659 [2024-12-16 05:37:57.656070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.659 [2024-12-16 05:37:57.656087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.659 [2024-12-16 05:37:57.656094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.659 [2024-12-16 05:37:57.656103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.659 [2024-12-16 05:37:57.656115] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:17.659 [2024-12-16 05:37:57.656129] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:17.659 [2024-12-16 05:37:57.656151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.659 [2024-12-16 05:37:57.656161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.659 [2024-12-16 05:37:57.656170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.660 [2024-12-16 05:37:57.656190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.660 [2024-12-16 05:37:57.656254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.660 [2024-12-16 05:37:57.656328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.660 [2024-12-16 05:37:57.656339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.660 [2024-12-16 05:37:57.656345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.656355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.660 [2024-12-16 05:37:57.656366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:17.660 [2024-12-16 05:37:57.656381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:17.660 [2024-12-16 05:37:57.656394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.656402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.656409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.660 [2024-12-16 05:37:57.656423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.660 [2024-12-16 05:37:57.656462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.660 [2024-12-16 05:37:57.656531] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.660 [2024-12-16 05:37:57.656543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.660 [2024-12-16 05:37:57.656549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.656556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.660 [2024-12-16 05:37:57.656566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:17.660 [2024-12-16 05:37:57.656583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.656596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.656604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.660 [2024-12-16 05:37:57.656617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.660 [2024-12-16 05:37:57.656645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.660 [2024-12-16 05:37:57.656729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.660 [2024-12-16 05:37:57.656743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.660 [2024-12-16 05:37:57.656749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.656757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.660 [2024-12-16 05:37:57.656766] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:17.660 [2024-12-16 05:37:57.656776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:17.660 [2024-12-16 05:37:57.656790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:17.660 [2024-12-16 05:37:57.656900] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:17.660 [2024-12-16 05:37:57.656909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:17.660 [2024-12-16 05:37:57.656925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.656941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.656952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.660 [2024-12-16 05:37:57.656967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.660 [2024-12-16 05:37:57.657000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.660 [2024-12-16 05:37:57.657070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.660 [2024-12-16 05:37:57.657082] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.660 [2024-12-16 05:37:57.657089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.660 [2024-12-16 05:37:57.657113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:17.660 [2024-12-16 05:37:57.657132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.660 [2024-12-16 05:37:57.657163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.660 [2024-12-16 05:37:57.657191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.660 [2024-12-16 05:37:57.657275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.660 [2024-12-16 05:37:57.657302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.660 [2024-12-16 05:37:57.657309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.660 [2024-12-16 05:37:57.657326] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:17.660 [2024-12-16 05:37:57.657355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:17.660 [2024-12-16 05:37:57.657381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:17.660 [2024-12-16 05:37:57.657401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:17.660 [2024-12-16 05:37:57.657423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.660 [2024-12-16 05:37:57.657448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.660 [2024-12-16 05:37:57.657484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.660 [2024-12-16 05:37:57.657678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.660 [2024-12-16 05:37:57.657700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.660 [2024-12-16 05:37:57.657708] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657716] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:17.660 [2024-12-16 05:37:57.657726] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:17.660 [2024-12-16 05:37:57.657735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657759] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657768] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.660 [2024-12-16 05:37:57.657793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.660 [2024-12-16 05:37:57.657804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.660 [2024-12-16 05:37:57.657832] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:17.660 [2024-12-16 05:37:57.657843] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:17.660 [2024-12-16 05:37:57.657852] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:17.660 [2024-12-16 05:37:57.657862] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:17.660 [2024-12-16 05:37:57.657871] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:20:17.660 [2024-12-16 05:37:57.657880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:17.660 [2024-12-16 05:37:57.657896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:17.660 [2024-12-16 05:37:57.657914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.657931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.660 [2024-12-16 05:37:57.657947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:17.660 [2024-12-16 05:37:57.657980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.660 [2024-12-16 05:37:57.658059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.660 [2024-12-16 05:37:57.658071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.660 [2024-12-16 05:37:57.658078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.658085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.660 [2024-12-16 05:37:57.658107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.658120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.660 [2024-12-16 05:37:57.658128] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.660 [2024-12-16 05:37:57.658163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.660 [2024-12-16 05:37:57.658175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:17.661 [2024-12-16 05:37:57.658200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.661 [2024-12-16 05:37:57.658210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:17.661 [2024-12-16 05:37:57.658234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.661 [2024-12-16 05:37:57.658244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.661 [2024-12-16 05:37:57.658274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.661 [2024-12-16 05:37:57.658284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:17.661 [2024-12-16 05:37:57.658318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:17.661 [2024-12-16 05:37:57.658331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.661 [2024-12-16 05:37:57.658354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.661 [2024-12-16 05:37:57.658386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.661 [2024-12-16 05:37:57.658398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:17.661 [2024-12-16 05:37:57.658421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:17.661 [2024-12-16 05:37:57.658429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.661 [2024-12-16 05:37:57.658436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.661 [2024-12-16 05:37:57.658554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.661 [2024-12-16 05:37:57.658569] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.661 [2024-12-16 05:37:57.658576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.661 [2024-12-16 05:37:57.658599] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:17.661 [2024-12-16 05:37:57.658610] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:17.661 [2024-12-16 05:37:57.658649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.661 [2024-12-16 05:37:57.658676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.661 [2024-12-16 05:37:57.658726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.661 [2024-12-16 05:37:57.658845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.661 [2024-12-16 05:37:57.658857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.661 [2024-12-16 05:37:57.658864] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658872] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:17.661 [2024-12-16 05:37:57.658883] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:17.661 [2024-12-16 05:37:57.658891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658904] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658911] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.661 [2024-12-16 05:37:57.658937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.661 [2024-12-16 05:37:57.658943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.658954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.661 [2024-12-16 05:37:57.658980] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:17.661 [2024-12-16 05:37:57.659036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.659049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.661 [2024-12-16 05:37:57.659064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.661 [2024-12-16 05:37:57.659077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.659085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:17.661 [2024-12-16 05:37:57.659092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:17.661 [2024-12-16 05:37:57.659107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.661 [2024-12-16 05:37:57.659141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.661 [2024-12-16 05:37:57.659162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:17.661 [2024-12-16 05:37:57.659381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.661 [2024-12-16 05:37:57.659404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.661 [2024-12-16 05:37:57.659413] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.659420] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:20:17.661 [2024-12-16 05:37:57.659429] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:20:17.661 [2024-12-16 05:37:57.659437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.659449] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.659456] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.659465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.661 [2024-12-16 05:37:57.659478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.661 [2024-12-16 05:37:57.659484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.659492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:17.661 [2024-12-16 05:37:57.659519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.661 [2024-12-16 05:37:57.659531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.661 [2024-12-16 05:37:57.659536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.659543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.661 [2024-12-16 05:37:57.659576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.663620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.661 [2024-12-16 05:37:57.663661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.661 [2024-12-16 05:37:57.663714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.661 [2024-12-16 05:37:57.663833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.661 [2024-12-16 05:37:57.663874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.661 [2024-12-16 05:37:57.663881] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.663888] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:20:17.661 [2024-12-16 05:37:57.663897] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:20:17.661 [2024-12-16 05:37:57.663904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.663918] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.663926] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.663943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.661 [2024-12-16 05:37:57.663957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.661 [2024-12-16 05:37:57.663964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.663971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.661 [2024-12-16 05:37:57.663998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.664009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.661 [2024-12-16 05:37:57.664024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.661 [2024-12-16 05:37:57.664064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.661 [2024-12-16 05:37:57.664178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.661 [2024-12-16 05:37:57.664205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.661 [2024-12-16 05:37:57.664214] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.664221] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:20:17.661 [2024-12-16 05:37:57.664244] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:20:17.661 [2024-12-16 05:37:57.664251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.664262] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.664269] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.664290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.661 [2024-12-16 05:37:57.664302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.661 [2024-12-16 05:37:57.664308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.661 [2024-12-16 05:37:57.664314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.662 ===================================================== 00:20:17.662 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:17.662 ===================================================== 00:20:17.662 Controller Capabilities/Features 00:20:17.662 ================================ 00:20:17.662 Vendor ID: 0000 00:20:17.662 Subsystem Vendor ID: 0000 00:20:17.662 Serial Number: .................... 
00:20:17.662 Model Number: ........................................ 00:20:17.662 Firmware Version: 25.01 00:20:17.662 Recommended Arb Burst: 0 00:20:17.662 IEEE OUI Identifier: 00 00 00 00:20:17.662 Multi-path I/O 00:20:17.662 May have multiple subsystem ports: No 00:20:17.662 May have multiple controllers: No 00:20:17.662 Associated with SR-IOV VF: No 00:20:17.662 Max Data Transfer Size: 131072 00:20:17.662 Max Number of Namespaces: 0 00:20:17.662 Max Number of I/O Queues: 1024 00:20:17.662 NVMe Specification Version (VS): 1.3 00:20:17.662 NVMe Specification Version (Identify): 1.3 00:20:17.662 Maximum Queue Entries: 128 00:20:17.662 Contiguous Queues Required: Yes 00:20:17.662 Arbitration Mechanisms Supported 00:20:17.662 Weighted Round Robin: Not Supported 00:20:17.662 Vendor Specific: Not Supported 00:20:17.662 Reset Timeout: 15000 ms 00:20:17.662 Doorbell Stride: 4 bytes 00:20:17.662 NVM Subsystem Reset: Not Supported 00:20:17.662 Command Sets Supported 00:20:17.662 NVM Command Set: Supported 00:20:17.662 Boot Partition: Not Supported 00:20:17.662 Memory Page Size Minimum: 4096 bytes 00:20:17.662 Memory Page Size Maximum: 4096 bytes 00:20:17.662 Persistent Memory Region: Not Supported 00:20:17.662 Optional Asynchronous Events Supported 00:20:17.662 Namespace Attribute Notices: Not Supported 00:20:17.662 Firmware Activation Notices: Not Supported 00:20:17.662 ANA Change Notices: Not Supported 00:20:17.662 PLE Aggregate Log Change Notices: Not Supported 00:20:17.662 LBA Status Info Alert Notices: Not Supported 00:20:17.662 EGE Aggregate Log Change Notices: Not Supported 00:20:17.662 Normal NVM Subsystem Shutdown event: Not Supported 00:20:17.662 Zone Descriptor Change Notices: Not Supported 00:20:17.662 Discovery Log Change Notices: Supported 00:20:17.662 Controller Attributes 00:20:17.662 128-bit Host Identifier: Not Supported 00:20:17.662 Non-Operational Permissive Mode: Not Supported 00:20:17.662 NVM Sets: Not Supported 00:20:17.662 Read Recovery Levels: Not Supported 00:20:17.662 Endurance Groups: Not Supported 00:20:17.662 Predictable Latency Mode: Not Supported 00:20:17.662 Traffic Based Keep ALive: Not Supported 00:20:17.662 Namespace Granularity: Not Supported 00:20:17.662 SQ Associations: Not Supported 00:20:17.662 UUID List: Not Supported 00:20:17.662 Multi-Domain Subsystem: Not Supported 00:20:17.662 Fixed Capacity Management: Not Supported 00:20:17.662 Variable Capacity Management: Not Supported 00:20:17.662 Delete Endurance Group: Not Supported 00:20:17.662 Delete NVM Set: Not Supported 00:20:17.662 Extended LBA Formats Supported: Not Supported 00:20:17.662 Flexible Data Placement Supported: Not Supported 00:20:17.662 00:20:17.662 Controller Memory Buffer Support 00:20:17.662 ================================ 00:20:17.662 Supported: No 00:20:17.662 00:20:17.662 Persistent Memory Region Support 00:20:17.662 ================================ 00:20:17.662 Supported: No 00:20:17.662 00:20:17.662 Admin Command Set Attributes 00:20:17.662 ============================ 00:20:17.662 Security Send/Receive: Not Supported 00:20:17.662 Format NVM: Not Supported 00:20:17.662 Firmware Activate/Download: Not Supported 00:20:17.662 Namespace Management: Not Supported 00:20:17.662 Device Self-Test: Not Supported 00:20:17.662 Directives: Not Supported 00:20:17.662 NVMe-MI: Not Supported 00:20:17.662 Virtualization Management: Not Supported 00:20:17.662 Doorbell Buffer Config: Not Supported 00:20:17.662 Get LBA Status Capability: Not Supported 00:20:17.662 Command & Feature Lockdown Capability: 
Not Supported 00:20:17.662 Abort Command Limit: 1 00:20:17.662 Async Event Request Limit: 4 00:20:17.662 Number of Firmware Slots: N/A 00:20:17.662 Firmware Slot 1 Read-Only: N/A 00:20:17.662 Firmware Activation Without Reset: N/A 00:20:17.662 Multiple Update Detection Support: N/A 00:20:17.662 Firmware Update Granularity: No Information Provided 00:20:17.662 Per-Namespace SMART Log: No 00:20:17.662 Asymmetric Namespace Access Log Page: Not Supported 00:20:17.662 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:17.662 Command Effects Log Page: Not Supported 00:20:17.662 Get Log Page Extended Data: Supported 00:20:17.662 Telemetry Log Pages: Not Supported 00:20:17.662 Persistent Event Log Pages: Not Supported 00:20:17.662 Supported Log Pages Log Page: May Support 00:20:17.662 Commands Supported & Effects Log Page: Not Supported 00:20:17.662 Feature Identifiers & Effects Log Page:May Support 00:20:17.662 NVMe-MI Commands & Effects Log Page: May Support 00:20:17.662 Data Area 4 for Telemetry Log: Not Supported 00:20:17.662 Error Log Page Entries Supported: 128 00:20:17.662 Keep Alive: Not Supported 00:20:17.662 00:20:17.662 NVM Command Set Attributes 00:20:17.662 ========================== 00:20:17.662 Submission Queue Entry Size 00:20:17.662 Max: 1 00:20:17.662 Min: 1 00:20:17.662 Completion Queue Entry Size 00:20:17.662 Max: 1 00:20:17.662 Min: 1 00:20:17.662 Number of Namespaces: 0 00:20:17.662 Compare Command: Not Supported 00:20:17.662 Write Uncorrectable Command: Not Supported 00:20:17.662 Dataset Management Command: Not Supported 00:20:17.662 Write Zeroes Command: Not Supported 00:20:17.662 Set Features Save Field: Not Supported 00:20:17.662 Reservations: Not Supported 00:20:17.662 Timestamp: Not Supported 00:20:17.662 Copy: Not Supported 00:20:17.662 Volatile Write Cache: Not Present 00:20:17.662 Atomic Write Unit (Normal): 1 00:20:17.662 Atomic Write Unit (PFail): 1 00:20:17.662 Atomic Compare & Write Unit: 1 00:20:17.662 Fused Compare & Write: Supported 00:20:17.662 Scatter-Gather List 00:20:17.662 SGL Command Set: Supported 00:20:17.662 SGL Keyed: Supported 00:20:17.662 SGL Bit Bucket Descriptor: Not Supported 00:20:17.662 SGL Metadata Pointer: Not Supported 00:20:17.662 Oversized SGL: Not Supported 00:20:17.662 SGL Metadata Address: Not Supported 00:20:17.662 SGL Offset: Supported 00:20:17.662 Transport SGL Data Block: Not Supported 00:20:17.662 Replay Protected Memory Block: Not Supported 00:20:17.662 00:20:17.662 Firmware Slot Information 00:20:17.662 ========================= 00:20:17.662 Active slot: 0 00:20:17.662 00:20:17.662 00:20:17.662 Error Log 00:20:17.662 ========= 00:20:17.662 00:20:17.662 Active Namespaces 00:20:17.662 ================= 00:20:17.662 Discovery Log Page 00:20:17.662 ================== 00:20:17.662 Generation Counter: 2 00:20:17.662 Number of Records: 2 00:20:17.662 Record Format: 0 00:20:17.662 00:20:17.662 Discovery Log Entry 0 00:20:17.662 ---------------------- 00:20:17.662 Transport Type: 3 (TCP) 00:20:17.662 Address Family: 1 (IPv4) 00:20:17.662 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:17.662 Entry Flags: 00:20:17.662 Duplicate Returned Information: 1 00:20:17.662 Explicit Persistent Connection Support for Discovery: 1 00:20:17.662 Transport Requirements: 00:20:17.662 Secure Channel: Not Required 00:20:17.662 Port ID: 0 (0x0000) 00:20:17.662 Controller ID: 65535 (0xffff) 00:20:17.662 Admin Max SQ Size: 128 00:20:17.662 Transport Service Identifier: 4420 00:20:17.662 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:20:17.662 Transport Address: 10.0.0.3 00:20:17.662 Discovery Log Entry 1 00:20:17.662 ---------------------- 00:20:17.662 Transport Type: 3 (TCP) 00:20:17.662 Address Family: 1 (IPv4) 00:20:17.662 Subsystem Type: 2 (NVM Subsystem) 00:20:17.662 Entry Flags: 00:20:17.662 Duplicate Returned Information: 0 00:20:17.662 Explicit Persistent Connection Support for Discovery: 0 00:20:17.662 Transport Requirements: 00:20:17.662 Secure Channel: Not Required 00:20:17.662 Port ID: 0 (0x0000) 00:20:17.662 Controller ID: 65535 (0xffff) 00:20:17.662 Admin Max SQ Size: 128 00:20:17.662 Transport Service Identifier: 4420 00:20:17.662 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:17.662 Transport Address: 10.0.0.3 [2024-12-16 05:37:57.664468] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:20:17.662 [2024-12-16 05:37:57.664495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.662 [2024-12-16 05:37:57.664510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.662 [2024-12-16 05:37:57.664520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:17.662 [2024-12-16 05:37:57.664529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.662 [2024-12-16 05:37:57.664537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.664546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.663 [2024-12-16 05:37:57.664553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.664562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.663 [2024-12-16 05:37:57.664582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.664623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.664633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.664648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.664684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.664775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.664789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.664797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.664806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.664825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.664837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 
[2024-12-16 05:37:57.664845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.664877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.664920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.665093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.665113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.665120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.665138] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:17.663 [2024-12-16 05:37:57.665148] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:17.663 [2024-12-16 05:37:57.665167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.665212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.665243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.665314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.665326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.665335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.665363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.665392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.665420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.665495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.665512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.665519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.665544] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.665573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.665601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.665677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.665690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.665696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.665722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.665751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.665779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.665845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.665857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.665863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.665902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.665917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.665935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.665977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.666044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.666055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.666061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.666088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666097] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.666116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.666142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.666206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.666223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.666233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.666257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.666285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.666311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.666380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.666391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.666398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.666423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.666451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.666476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.666540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.663 [2024-12-16 05:37:57.666551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.663 [2024-12-16 05:37:57.666557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.663 [2024-12-16 05:37:57.666580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.663 [2024-12-16 05:37:57.666628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.663 [2024-12-16 05:37:57.666647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.663 [2024-12-16 05:37:57.666678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.663 [2024-12-16 05:37:57.666740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.664 [2024-12-16 05:37:57.666752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.664 [2024-12-16 05:37:57.666758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.666765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.664 [2024-12-16 05:37:57.666783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.666794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.666802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.664 [2024-12-16 05:37:57.666814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.664 [2024-12-16 05:37:57.666841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.664 [2024-12-16 05:37:57.666911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.664 [2024-12-16 05:37:57.666926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.664 [2024-12-16 05:37:57.666933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.666940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.664 [2024-12-16 05:37:57.666960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.666984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.666990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.664 [2024-12-16 05:37:57.667002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.664 [2024-12-16 05:37:57.667031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.664 [2024-12-16 05:37:57.667107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.664 [2024-12-16 05:37:57.667131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.664 [2024-12-16 05:37:57.667138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.667145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.664 [2024-12-16 05:37:57.667163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.667172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.667178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.664 [2024-12-16 05:37:57.667191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.664 [2024-12-16 05:37:57.667219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.664 [2024-12-16 05:37:57.667286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.664 [2024-12-16 05:37:57.667311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.664 [2024-12-16 05:37:57.667318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.667325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.664 [2024-12-16 05:37:57.667343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.667351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.667357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.664 [2024-12-16 05:37:57.667369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.664 [2024-12-16 05:37:57.667396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.664 [2024-12-16 05:37:57.667460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.664 [2024-12-16 05:37:57.667485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.664 [2024-12-16 05:37:57.667492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.667499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.664 [2024-12-16 05:37:57.667517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.667525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.667535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.664 [2024-12-16 05:37:57.667548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.664 [2024-12-16 05:37:57.667578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.664 [2024-12-16 05:37:57.671709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.664 [2024-12-16 05:37:57.671737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.664 [2024-12-16 05:37:57.671746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.671754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.664 [2024-12-16 05:37:57.671778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.671787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.671794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.664 [2024-12-16 05:37:57.671811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.664 [2024-12-16 05:37:57.671865] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.664 [2024-12-16 05:37:57.671945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.664 [2024-12-16 05:37:57.671958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.664 [2024-12-16 05:37:57.671964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.664 [2024-12-16 05:37:57.671972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.664 [2024-12-16 05:37:57.671987] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:20:17.664 00:20:17.664 05:37:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:17.664 [2024-12-16 05:37:57.788053] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:17.664 [2024-12-16 05:37:57.788195] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81370 ] 00:20:17.926 [2024-12-16 05:37:57.977456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:20:17.926 [2024-12-16 05:37:57.977629] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:17.926 [2024-12-16 05:37:57.977648] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:17.926 [2024-12-16 05:37:57.977673] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:17.926 [2024-12-16 05:37:57.977691] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:17.926 [2024-12-16 05:37:57.978128] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:20:17.926 [2024-12-16 05:37:57.978213] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:17.926 [2024-12-16 05:37:57.991705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:17.926 [2024-12-16 05:37:57.991762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:17.926 [2024-12-16 05:37:57.991775] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:17.926 [2024-12-16 05:37:57.991782] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:17.926 [2024-12-16 05:37:57.991911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.926 [2024-12-16 05:37:57.991929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.926 [2024-12-16 05:37:57.991938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.926 [2024-12-16 05:37:57.991965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:17.926 [2024-12-16 05:37:57.992011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.926 [2024-12-16 05:37:57.999618] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.926 [2024-12-16 05:37:57.999650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.926 [2024-12-16 05:37:57.999675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.926 [2024-12-16 05:37:57.999685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.926 [2024-12-16 05:37:57.999709] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:17.926 [2024-12-16 05:37:57.999726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:20:17.926 [2024-12-16 05:37:57.999739] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:17.926 [2024-12-16 05:37:57.999769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.926 [2024-12-16 05:37:57.999784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.926 [2024-12-16 05:37:57.999792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.926 [2024-12-16 05:37:57.999808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.926 [2024-12-16 05:37:57.999872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.926 [2024-12-16 05:37:57.999968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.926 [2024-12-16 05:37:57.999983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.926 [2024-12-16 05:37:57.999991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.926 [2024-12-16 05:37:58.000003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.926 [2024-12-16 05:37:58.000019] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:17.926 [2024-12-16 05:37:58.000034] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:17.926 [2024-12-16 05:37:58.000049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.926 [2024-12-16 05:37:58.000058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.926 [2024-12-16 05:37:58.000066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.926 [2024-12-16 05:37:58.000093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.926 [2024-12-16 05:37:58.000130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.926 [2024-12-16 05:37:58.000236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.927 [2024-12-16 05:37:58.000248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.927 [2024-12-16 05:37:58.000255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.000263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.927 [2024-12-16 05:37:58.000274] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:17.927 [2024-12-16 05:37:58.000288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:17.927 [2024-12-16 05:37:58.000302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.000315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.000322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.927 [2024-12-16 05:37:58.000336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.927 [2024-12-16 05:37:58.000369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.927 [2024-12-16 05:37:58.000432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.927 [2024-12-16 05:37:58.000444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.927 [2024-12-16 05:37:58.000450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.000457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.927 [2024-12-16 05:37:58.000468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:17.927 [2024-12-16 05:37:58.000489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.000499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.000507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.927 [2024-12-16 05:37:58.000521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.927 [2024-12-16 05:37:58.000549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.927 [2024-12-16 05:37:58.000622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.927 [2024-12-16 05:37:58.000639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.927 [2024-12-16 05:37:58.000647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.000686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.927 [2024-12-16 05:37:58.000697] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:17.927 [2024-12-16 05:37:58.000712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:17.927 [2024-12-16 05:37:58.000727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:17.927 [2024-12-16 05:37:58.000838] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:17.927 [2024-12-16 05:37:58.000848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:17.927 [2024-12-16 05:37:58.000863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.000871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.000879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.927 [2024-12-16 05:37:58.000894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.927 [2024-12-16 05:37:58.000926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.927 [2024-12-16 05:37:58.000995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.927 [2024-12-16 05:37:58.001008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.927 [2024-12-16 05:37:58.001015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.927 [2024-12-16 05:37:58.001049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:17.927 [2024-12-16 05:37:58.001070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.927 [2024-12-16 05:37:58.001104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.927 [2024-12-16 05:37:58.001133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.927 [2024-12-16 05:37:58.001205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.927 [2024-12-16 05:37:58.001217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.927 [2024-12-16 05:37:58.001228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.927 [2024-12-16 05:37:58.001250] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:17.927 [2024-12-16 05:37:58.001260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:17.927 [2024-12-16 05:37:58.001287] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:17.927 [2024-12-16 05:37:58.001307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:17.927 [2024-12-16 05:37:58.001331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 
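The FABRIC PROPERTY GET/SET capsules traced above implement the standard NVMe controller-enable handshake: read CC, clear EN if it was already set and wait for CSTS.RDY = 0, then write CC.EN = 1 and poll until CSTS.RDY = 1 ("controller is ready"). A minimal sketch of that handshake follows; prop_get32()/prop_set32() are hypothetical placeholders for the transport's Fabrics Property Get/Set commands (they are not SPDK API), while the register layouts are the real ones from spdk/nvme_spec.h.

/*
 * Sketch of the enable handshake visible in the trace above.
 * prop_get32()/prop_set32() are hypothetical transport hooks, NOT SPDK API.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>
#include "spdk/nvme_spec.h"

extern uint32_t prop_get32(uint32_t offset);               /* Fabrics Property Get (hypothetical) */
extern void prop_set32(uint32_t offset, uint32_t value);   /* Fabrics Property Set (hypothetical) */

static bool csts_rdy(void)
{
	union spdk_nvme_csts_register csts;

	csts.raw = prop_get32(offsetof(struct spdk_nvme_registers, csts));
	return csts.bits.rdy;
}

static void enable_controller(void)
{
	union spdk_nvme_cc_register cc;

	cc.raw = prop_get32(offsetof(struct spdk_nvme_registers, cc));
	if (cc.bits.en) {                       /* "check en": controller already enabled? */
		cc.bits.en = 0;                 /* disable and wait for CSTS.RDY = 0 */
		prop_set32(offsetof(struct spdk_nvme_registers, cc), cc.raw);
		while (csts_rdy()) {
			usleep(1000);
		}
	}

	cc.bits.en = 1;                         /* "Setting CC.EN = 1" */
	prop_set32(offsetof(struct spdk_nvme_registers, cc), cc.raw);
	while (!csts_rdy()) {                   /* "wait for CSTS.RDY = 1" */
		usleep(1000);
	}
	/* CC.EN = 1 && CSTS.RDY = 1 - controller is ready, as logged above. */
}
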
00:20:17.927 [2024-12-16 05:37:58.001360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.927 [2024-12-16 05:37:58.001393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.927 [2024-12-16 05:37:58.001520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.927 [2024-12-16 05:37:58.001544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.927 [2024-12-16 05:37:58.001553] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001561] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:17.927 [2024-12-16 05:37:58.001571] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:17.927 [2024-12-16 05:37:58.001583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001633] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001644] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.927 [2024-12-16 05:37:58.001669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.927 [2024-12-16 05:37:58.001676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.927 [2024-12-16 05:37:58.001703] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:17.927 [2024-12-16 05:37:58.001714] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:17.927 [2024-12-16 05:37:58.001728] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:17.927 [2024-12-16 05:37:58.001737] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:17.927 [2024-12-16 05:37:58.001748] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:17.927 [2024-12-16 05:37:58.001758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:17.927 [2024-12-16 05:37:58.001785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:17.927 [2024-12-16 05:37:58.001800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.927 [2024-12-16 05:37:58.001832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:17.927 [2024-12-16 05:37:58.001868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:17.927 [2024-12-16 05:37:58.001940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.927 [2024-12-16 05:37:58.001953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.927 [2024-12-16 05:37:58.001974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.001981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.927 [2024-12-16 05:37:58.001998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.002009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.002017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:17.927 [2024-12-16 05:37:58.002033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.927 [2024-12-16 05:37:58.002054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.002061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.002067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:17.927 [2024-12-16 05:37:58.002082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.927 [2024-12-16 05:37:58.002093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.002100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.002106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:17.927 [2024-12-16 05:37:58.002116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.927 [2024-12-16 05:37:58.002126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.002132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.927 [2024-12-16 05:37:58.002139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.928 [2024-12-16 05:37:58.002149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.928 [2024-12-16 05:37:58.002158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.002182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.002194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.002205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.928 [2024-12-16 05:37:58.002219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-12-16 05:37:58.002253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x62600001b100, cid 0, qid 0 00:20:17.928 [2024-12-16 05:37:58.002265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:17.928 [2024-12-16 05:37:58.002273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:17.928 [2024-12-16 05:37:58.002281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.928 [2024-12-16 05:37:58.002289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.928 [2024-12-16 05:37:58.002394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.928 [2024-12-16 05:37:58.002406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.928 [2024-12-16 05:37:58.002413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.002420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.928 [2024-12-16 05:37:58.002434] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:17.928 [2024-12-16 05:37:58.002447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.002462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.002474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.002486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.002500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.002508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.928 [2024-12-16 05:37:58.002521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:17.928 [2024-12-16 05:37:58.002550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.928 [2024-12-16 05:37:58.002685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.928 [2024-12-16 05:37:58.002701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.928 [2024-12-16 05:37:58.002712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.002720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.928 [2024-12-16 05:37:58.002819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.002844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.002864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.002879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x61500000f080) 00:20:17.928 [2024-12-16 05:37:58.002894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-12-16 05:37:58.002928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.928 [2024-12-16 05:37:58.003055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.928 [2024-12-16 05:37:58.003068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.928 [2024-12-16 05:37:58.003075] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003082] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:17.928 [2024-12-16 05:37:58.003090] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:17.928 [2024-12-16 05:37:58.003097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003113] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003123] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.928 [2024-12-16 05:37:58.003147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.928 [2024-12-16 05:37:58.003153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.928 [2024-12-16 05:37:58.003194] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:17.928 [2024-12-16 05:37:58.003216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.003254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.003274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.928 [2024-12-16 05:37:58.003306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-12-16 05:37:58.003340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.928 [2024-12-16 05:37:58.003452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.928 [2024-12-16 05:37:58.003467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.928 [2024-12-16 05:37:58.003475] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003482] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:17.928 [2024-12-16 05:37:58.003489] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): 
expected_datao=0, payload_size=4096 00:20:17.928 [2024-12-16 05:37:58.003497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003508] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003515] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.928 [2024-12-16 05:37:58.003538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.928 [2024-12-16 05:37:58.003544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.003554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.928 [2024-12-16 05:37:58.003589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.007619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.007659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.007670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.928 [2024-12-16 05:37:58.007688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.928 [2024-12-16 05:37:58.007725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.928 [2024-12-16 05:37:58.007823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.928 [2024-12-16 05:37:58.007846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.928 [2024-12-16 05:37:58.007872] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.007879] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:17.928 [2024-12-16 05:37:58.007888] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:17.928 [2024-12-16 05:37:58.007896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.007912] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.007921] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.007950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.928 [2024-12-16 05:37:58.007962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.928 [2024-12-16 05:37:58.007969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.928 [2024-12-16 05:37:58.007976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.928 [2024-12-16 05:37:58.008015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.008032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.008048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.008061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.008071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:17.928 [2024-12-16 05:37:58.008080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:20:17.929 [2024-12-16 05:37:58.008090] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:20:17.929 [2024-12-16 05:37:58.008102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:20:17.929 [2024-12-16 05:37:58.008112] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:20:17.929 [2024-12-16 05:37:58.008154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.929 [2024-12-16 05:37:58.008182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-12-16 05:37:58.008195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:17.929 [2024-12-16 05:37:58.008237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:17.929 [2024-12-16 05:37:58.008273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.929 [2024-12-16 05:37:58.008296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:17.929 [2024-12-16 05:37:58.008372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.929 [2024-12-16 05:37:58.008388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.929 [2024-12-16 05:37:58.008395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.929 [2024-12-16 05:37:58.008416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.929 [2024-12-16 05:37:58.008426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.929 [2024-12-16 05:37:58.008434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:17.929 [2024-12-16 
05:37:58.008459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:17.929 [2024-12-16 05:37:58.008481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-12-16 05:37:58.008511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:17.929 [2024-12-16 05:37:58.008578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.929 [2024-12-16 05:37:58.008590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.929 [2024-12-16 05:37:58.008596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:17.929 [2024-12-16 05:37:58.008652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:17.929 [2024-12-16 05:37:58.008678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-12-16 05:37:58.008708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:17.929 [2024-12-16 05:37:58.008787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.929 [2024-12-16 05:37:58.008802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.929 [2024-12-16 05:37:58.008808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:17.929 [2024-12-16 05:37:58.008834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:17.929 [2024-12-16 05:37:58.008859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-12-16 05:37:58.008890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:17.929 [2024-12-16 05:37:58.008963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.929 [2024-12-16 05:37:58.008976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.929 [2024-12-16 05:37:58.008983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.008990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:17.929 [2024-12-16 05:37:58.009040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:17.929 [2024-12-16 05:37:58.009066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-12-16 05:37:58.009081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:17.929 [2024-12-16 05:37:58.009102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-12-16 05:37:58.009116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:20:17.929 [2024-12-16 05:37:58.009141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-12-16 05:37:58.009162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:17.929 [2024-12-16 05:37:58.009184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.929 [2024-12-16 05:37:58.009216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:17.929 [2024-12-16 05:37:58.009228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:17.929 [2024-12-16 05:37:58.009236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:20:17.929 [2024-12-16 05:37:58.009244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:17.929 [2024-12-16 05:37:58.009426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.929 [2024-12-16 05:37:58.009439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.929 [2024-12-16 05:37:58.009446] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009453] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:20:17.929 [2024-12-16 05:37:58.009465] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:20:17.929 [2024-12-16 05:37:58.009475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009508] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009519] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.929 [2024-12-16 05:37:58.009538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.929 [2024-12-16 05:37:58.009544] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009551] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:20:17.929 [2024-12-16 05:37:58.009559] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:17.929 [2024-12-16 05:37:58.009566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009597] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009622] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.929 [2024-12-16 05:37:58.009643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.929 [2024-12-16 05:37:58.009649] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009656] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:20:17.929 [2024-12-16 05:37:58.009663] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:17.929 [2024-12-16 05:37:58.009670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009684] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009691] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:17.929 [2024-12-16 05:37:58.009714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:17.929 [2024-12-16 05:37:58.009720] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009727] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:20:17.929 [2024-12-16 05:37:58.009734] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:17.929 [2024-12-16 05:37:58.009741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009752] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009758] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.929 [2024-12-16 05:37:58.009786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.929 [2024-12-16 05:37:58.009793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:17.929 [2024-12-16 05:37:58.009828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.929 [2024-12-16 05:37:58.009840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.929 [2024-12-16 05:37:58.009846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:17.929 [2024-12-16 05:37:58.009869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.929 [2024-12-16 05:37:58.009880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.929 [2024-12-16 
05:37:58.009886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.929 [2024-12-16 05:37:58.009893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:20:17.929 [2024-12-16 05:37:58.009909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.930 [2024-12-16 05:37:58.009920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.930 [2024-12-16 05:37:58.009926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.930 ===================================================== 00:20:17.930 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.930 ===================================================== 00:20:17.930 Controller Capabilities/Features 00:20:17.930 ================================ 00:20:17.930 Vendor ID: 8086 00:20:17.930 Subsystem Vendor ID: 8086 00:20:17.930 Serial Number: SPDK00000000000001 00:20:17.930 Model Number: SPDK bdev Controller 00:20:17.930 Firmware Version: 25.01 00:20:17.930 Recommended Arb Burst: 6 00:20:17.930 IEEE OUI Identifier: e4 d2 5c 00:20:17.930 Multi-path I/O 00:20:17.930 May have multiple subsystem ports: Yes 00:20:17.930 May have multiple controllers: Yes 00:20:17.930 Associated with SR-IOV VF: No 00:20:17.930 Max Data Transfer Size: 131072 00:20:17.930 Max Number of Namespaces: 32 00:20:17.930 Max Number of I/O Queues: 127 00:20:17.930 NVMe Specification Version (VS): 1.3 00:20:17.930 NVMe Specification Version (Identify): 1.3 00:20:17.930 Maximum Queue Entries: 128 00:20:17.930 Contiguous Queues Required: Yes 00:20:17.930 Arbitration Mechanisms Supported 00:20:17.930 Weighted Round Robin: Not Supported 00:20:17.930 Vendor Specific: Not Supported 00:20:17.930 Reset Timeout: 15000 ms 00:20:17.930 Doorbell Stride: 4 bytes 00:20:17.930 NVM Subsystem Reset: Not Supported 00:20:17.930 Command Sets Supported 00:20:17.930 NVM Command Set: Supported 00:20:17.930 Boot Partition: Not Supported 00:20:17.930 Memory Page Size Minimum: 4096 bytes 00:20:17.930 Memory Page Size Maximum: 4096 bytes 00:20:17.930 Persistent Memory Region: Not Supported 00:20:17.930 Optional Asynchronous Events Supported 00:20:17.930 Namespace Attribute Notices: Supported 00:20:17.930 Firmware Activation Notices: Not Supported 00:20:17.930 ANA Change Notices: Not Supported 00:20:17.930 PLE Aggregate Log Change Notices: Not Supported 00:20:17.930 LBA Status Info Alert Notices: Not Supported 00:20:17.930 EGE Aggregate Log Change Notices: Not Supported 00:20:17.930 Normal NVM Subsystem Shutdown event: Not Supported 00:20:17.930 Zone Descriptor Change Notices: Not Supported 00:20:17.930 Discovery Log Change Notices: Not Supported 00:20:17.930 Controller Attributes 00:20:17.930 128-bit Host Identifier: Supported 00:20:17.930 Non-Operational Permissive Mode: Not Supported 00:20:17.930 NVM Sets: Not Supported 00:20:17.930 Read Recovery Levels: Not Supported 00:20:17.930 Endurance Groups: Not Supported 00:20:17.930 Predictable Latency Mode: Not Supported 00:20:17.930 Traffic Based Keep ALive: Not Supported 00:20:17.930 Namespace Granularity: Not Supported 00:20:17.930 SQ Associations: Not Supported 00:20:17.930 UUID List: Not Supported 00:20:17.930 Multi-Domain Subsystem: Not Supported 00:20:17.930 Fixed Capacity Management: Not Supported 00:20:17.930 Variable Capacity Management: Not Supported 00:20:17.930 Delete Endurance Group: Not Supported 00:20:17.930 Delete NVM Set: Not Supported 00:20:17.930 Extended LBA Formats Supported: Not 
Supported 00:20:17.930 Flexible Data Placement Supported: Not Supported 00:20:17.930 00:20:17.930 Controller Memory Buffer Support 00:20:17.930 ================================ 00:20:17.930 Supported: No 00:20:17.930 00:20:17.930 Persistent Memory Region Support 00:20:17.930 ================================ 00:20:17.930 Supported: No 00:20:17.930 00:20:17.930 Admin Command Set Attributes 00:20:17.930 ============================ 00:20:17.930 Security Send/Receive: Not Supported 00:20:17.930 Format NVM: Not Supported 00:20:17.930 Firmware Activate/Download: Not Supported 00:20:17.930 Namespace Management: Not Supported 00:20:17.930 Device Self-Test: Not Supported 00:20:17.930 Directives: Not Supported 00:20:17.930 NVMe-MI: Not Supported 00:20:17.930 Virtualization Management: Not Supported 00:20:17.930 Doorbell Buffer Config: Not Supported 00:20:17.930 Get LBA Status Capability: Not Supported 00:20:17.930 Command & Feature Lockdown Capability: Not Supported 00:20:17.930 Abort Command Limit: 4 00:20:17.930 Async Event Request Limit: 4 00:20:17.930 Number of Firmware Slots: N/A 00:20:17.930 Firmware Slot 1 Read-Only: N/A 00:20:17.930 Firmware Activation Without Reset: [2024-12-16 05:37:58.009933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:17.930 N/A 00:20:17.930 Multiple Update Detection Support: N/A 00:20:17.930 Firmware Update Granularity: No Information Provided 00:20:17.930 Per-Namespace SMART Log: No 00:20:17.930 Asymmetric Namespace Access Log Page: Not Supported 00:20:17.930 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:17.930 Command Effects Log Page: Supported 00:20:17.930 Get Log Page Extended Data: Supported 00:20:17.930 Telemetry Log Pages: Not Supported 00:20:17.930 Persistent Event Log Pages: Not Supported 00:20:17.930 Supported Log Pages Log Page: May Support 00:20:17.930 Commands Supported & Effects Log Page: Not Supported 00:20:17.930 Feature Identifiers & Effects Log Page:May Support 00:20:17.930 NVMe-MI Commands & Effects Log Page: May Support 00:20:17.930 Data Area 4 for Telemetry Log: Not Supported 00:20:17.930 Error Log Page Entries Supported: 128 00:20:17.930 Keep Alive: Supported 00:20:17.930 Keep Alive Granularity: 10000 ms 00:20:17.930 00:20:17.930 NVM Command Set Attributes 00:20:17.930 ========================== 00:20:17.930 Submission Queue Entry Size 00:20:17.930 Max: 64 00:20:17.930 Min: 64 00:20:17.930 Completion Queue Entry Size 00:20:17.930 Max: 16 00:20:17.930 Min: 16 00:20:17.930 Number of Namespaces: 32 00:20:17.930 Compare Command: Supported 00:20:17.930 Write Uncorrectable Command: Not Supported 00:20:17.930 Dataset Management Command: Supported 00:20:17.930 Write Zeroes Command: Supported 00:20:17.930 Set Features Save Field: Not Supported 00:20:17.930 Reservations: Supported 00:20:17.930 Timestamp: Not Supported 00:20:17.930 Copy: Supported 00:20:17.930 Volatile Write Cache: Present 00:20:17.930 Atomic Write Unit (Normal): 1 00:20:17.930 Atomic Write Unit (PFail): 1 00:20:17.930 Atomic Compare & Write Unit: 1 00:20:17.930 Fused Compare & Write: Supported 00:20:17.930 Scatter-Gather List 00:20:17.930 SGL Command Set: Supported 00:20:17.930 SGL Keyed: Supported 00:20:17.930 SGL Bit Bucket Descriptor: Not Supported 00:20:17.930 SGL Metadata Pointer: Not Supported 00:20:17.930 Oversized SGL: Not Supported 00:20:17.930 SGL Metadata Address: Not Supported 00:20:17.930 SGL Offset: Supported 00:20:17.930 Transport SGL Data Block: Not Supported 00:20:17.930 Replay Protected Memory Block: Not 
Supported 00:20:17.930 00:20:17.930 Firmware Slot Information 00:20:17.930 ========================= 00:20:17.930 Active slot: 1 00:20:17.930 Slot 1 Firmware Revision: 25.01 00:20:17.930 00:20:17.930 00:20:17.930 Commands Supported and Effects 00:20:17.930 ============================== 00:20:17.930 Admin Commands 00:20:17.930 -------------- 00:20:17.930 Get Log Page (02h): Supported 00:20:17.930 Identify (06h): Supported 00:20:17.930 Abort (08h): Supported 00:20:17.930 Set Features (09h): Supported 00:20:17.930 Get Features (0Ah): Supported 00:20:17.930 Asynchronous Event Request (0Ch): Supported 00:20:17.930 Keep Alive (18h): Supported 00:20:17.930 I/O Commands 00:20:17.930 ------------ 00:20:17.930 Flush (00h): Supported LBA-Change 00:20:17.930 Write (01h): Supported LBA-Change 00:20:17.930 Read (02h): Supported 00:20:17.930 Compare (05h): Supported 00:20:17.930 Write Zeroes (08h): Supported LBA-Change 00:20:17.930 Dataset Management (09h): Supported LBA-Change 00:20:17.930 Copy (19h): Supported LBA-Change 00:20:17.930 00:20:17.930 Error Log 00:20:17.930 ========= 00:20:17.930 00:20:17.930 Arbitration 00:20:17.930 =========== 00:20:17.930 Arbitration Burst: 1 00:20:17.930 00:20:17.930 Power Management 00:20:17.930 ================ 00:20:17.930 Number of Power States: 1 00:20:17.930 Current Power State: Power State #0 00:20:17.930 Power State #0: 00:20:17.930 Max Power: 0.00 W 00:20:17.930 Non-Operational State: Operational 00:20:17.930 Entry Latency: Not Reported 00:20:17.930 Exit Latency: Not Reported 00:20:17.930 Relative Read Throughput: 0 00:20:17.930 Relative Read Latency: 0 00:20:17.930 Relative Write Throughput: 0 00:20:17.930 Relative Write Latency: 0 00:20:17.930 Idle Power: Not Reported 00:20:17.930 Active Power: Not Reported 00:20:17.930 Non-Operational Permissive Mode: Not Supported 00:20:17.930 00:20:17.930 Health Information 00:20:17.930 ================== 00:20:17.930 Critical Warnings: 00:20:17.930 Available Spare Space: OK 00:20:17.930 Temperature: OK 00:20:17.930 Device Reliability: OK 00:20:17.930 Read Only: No 00:20:17.930 Volatile Memory Backup: OK 00:20:17.930 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:17.930 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:17.931 Available Spare: 0% 00:20:17.931 Available Spare Threshold: 0% 00:20:17.931 Life Percentage Used:[2024-12-16 05:37:58.010101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.010114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:17.931 [2024-12-16 05:37:58.010129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-12-16 05:37:58.010164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:17.931 [2024-12-16 05:37:58.010240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.931 [2024-12-16 05:37:58.010254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.931 [2024-12-16 05:37:58.010261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.010274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.010377] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:20:17.931 [2024-12-16 
05:37:58.010411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.010426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-12-16 05:37:58.010441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.010451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-12-16 05:37:58.010459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.010468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-12-16 05:37:58.010477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.010486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:17.931 [2024-12-16 05:37:58.010501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.010510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.010517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.931 [2024-12-16 05:37:58.010532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-12-16 05:37:58.010566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.931 [2024-12-16 05:37:58.010673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.931 [2024-12-16 05:37:58.010689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.931 [2024-12-16 05:37:58.010700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.010709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.010725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.010734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.010744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.931 [2024-12-16 05:37:58.010758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-12-16 05:37:58.010793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.931 [2024-12-16 05:37:58.010916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.931 [2024-12-16 05:37:58.010938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.931 [2024-12-16 05:37:58.010946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.010953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 
05:37:58.010963] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:20:17.931 [2024-12-16 05:37:58.010972] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:20:17.931 [2024-12-16 05:37:58.010993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.931 [2024-12-16 05:37:58.011023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-12-16 05:37:58.011070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.931 [2024-12-16 05:37:58.011130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.931 [2024-12-16 05:37:58.011141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.931 [2024-12-16 05:37:58.011147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.011174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.931 [2024-12-16 05:37:58.011206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-12-16 05:37:58.011233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.931 [2024-12-16 05:37:58.011296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.931 [2024-12-16 05:37:58.011307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.931 [2024-12-16 05:37:58.011314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.011343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.931 [2024-12-16 05:37:58.011371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-12-16 05:37:58.011397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.931 [2024-12-16 05:37:58.011469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.931 [2024-12-16 05:37:58.011481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.931 [2024-12-16 05:37:58.011488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.931 
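The entries around this point cover teardown: queued admin requests complete as ABORTED - SQ DELETION, RTD3E and the shutdown timeout are computed, and the shutdown poller reports completion a few lines below. From the application side, the whole trace, from the init state machine through this shutdown, is bracketed by a connect and a detach. A minimal sketch modeled on SPDK's identify example (not the test binary itself; the program name is made up and error handling is trimmed):

/*
 * Connect to the same target the log uses, read the identify controller
 * data, then detach. The detach is what drives the CC shutdown handshake
 * and the "shutdown complete in N milliseconds" messages in this trace.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Classic env setup pattern from the SPDK examples; exact opts fields
	 * can vary slightly across SPDK versions. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string passed to spdk_nvme_identify above. */
	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");

	/* Runs the init state machine seen in the trace: connect adminq,
	 * read vs/cap, enable, identify, configure AER, keep alive, ... */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s Firmware: %.8s\n", cdata->mn, cdata->fr);

	/* Drives the graceful shutdown traced here. */
	spdk_nvme_detach(ctrlr);
	return 0;
}
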
[2024-12-16 05:37:58.011495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.011513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.011527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.931 [2024-12-16 05:37:58.011543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-12-16 05:37:58.011571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.931 [2024-12-16 05:37:58.015645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.931 [2024-12-16 05:37:58.015676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.931 [2024-12-16 05:37:58.015685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.015693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.931 [2024-12-16 05:37:58.015716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.015726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:17.931 [2024-12-16 05:37:58.015733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:17.931 [2024-12-16 05:37:58.015747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:17.931 [2024-12-16 05:37:58.015781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:17.931 [2024-12-16 05:37:58.015887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:17.931 [2024-12-16 05:37:58.015901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:17.932 [2024-12-16 05:37:58.015908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:17.932 [2024-12-16 05:37:58.015915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:17.932 [2024-12-16 05:37:58.015930] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:20:17.932 0% 00:20:17.932 Data Units Read: 0 00:20:17.932 Data Units Written: 0 00:20:17.932 Host Read Commands: 0 00:20:17.932 Host Write Commands: 0 00:20:17.932 Controller Busy Time: 0 minutes 00:20:17.932 Power Cycles: 0 00:20:17.932 Power On Hours: 0 hours 00:20:17.932 Unsafe Shutdowns: 0 00:20:17.932 Unrecoverable Media Errors: 0 00:20:17.932 Lifetime Error Log Entries: 0 00:20:17.932 Warning Temperature Time: 0 minutes 00:20:17.932 Critical Temperature Time: 0 minutes 00:20:17.932 00:20:17.932 Number of Queues 00:20:17.932 ================ 00:20:17.932 Number of I/O Submission Queues: 127 00:20:17.932 Number of I/O Completion Queues: 127 00:20:17.932 00:20:17.932 Active Namespaces 00:20:17.932 ================= 00:20:17.932 Namespace ID:1 00:20:17.932 Error Recovery Timeout: Unlimited 00:20:17.932 Command Set Identifier: NVM (00h) 00:20:17.932 Deallocate: Supported 00:20:17.932 Deallocated/Unwritten Error: Not Supported 00:20:17.932 Deallocated Read Value: 
Unknown 00:20:17.932 Deallocate in Write Zeroes: Not Supported 00:20:17.932 Deallocated Guard Field: 0xFFFF 00:20:17.932 Flush: Supported 00:20:17.932 Reservation: Supported 00:20:17.932 Namespace Sharing Capabilities: Multiple Controllers 00:20:17.932 Size (in LBAs): 131072 (0GiB) 00:20:17.932 Capacity (in LBAs): 131072 (0GiB) 00:20:17.932 Utilization (in LBAs): 131072 (0GiB) 00:20:17.932 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:17.932 EUI64: ABCDEF0123456789 00:20:17.932 UUID: 261745ff-e76d-4df3-ac28-a194a8c2cd61 00:20:17.932 Thin Provisioning: Not Supported 00:20:17.932 Per-NS Atomic Units: Yes 00:20:17.932 Atomic Boundary Size (Normal): 0 00:20:17.932 Atomic Boundary Size (PFail): 0 00:20:17.932 Atomic Boundary Offset: 0 00:20:17.932 Maximum Single Source Range Length: 65535 00:20:17.932 Maximum Copy Length: 65535 00:20:17.932 Maximum Source Range Count: 1 00:20:17.932 NGUID/EUI64 Never Reused: No 00:20:17.932 Namespace Write Protected: No 00:20:17.932 Number of LBA Formats: 1 00:20:17.932 Current LBA Format: LBA Format #00 00:20:17.932 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:17.932 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:17.932 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:17.932 rmmod nvme_tcp 00:20:17.932 rmmod nvme_fabrics 00:20:18.190 rmmod nvme_keyring 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 81326 ']' 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 81326 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 81326 ']' 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 81326 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 81326 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81326' 00:20:18.190 killing process with pid 81326 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 81326 00:20:18.190 05:37:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 81326 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:19.125 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
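The records above are SPDK's nvmftestfini path for the identify test: the subsystem is deleted over RPC, the nvme-tcp/nvme-fabrics/nvme-keyring host modules are unloaded, the nvmf_tgt process (pid 81326) is killed, and nvmf_veth_fini tears down the veth/bridge topology. A rough manual equivalent, reusing the names recorded above (the final `ip netns delete` step is an assumption about what SPDK's _remove_spdk_ns helper amounts to):
# sketch only: mirrors the cleanup commands recorded above
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # drop the test subsystem
modprobe -v -r nvme-tcp nvme-fabrics                                 # unload host transport modules
kill 81326                                                           # stop the nvmf_tgt reactor process (pid from this run)
ip link delete nvmf_br type bridge                                   # remove the test bridge
ip link delete nvmf_init_if                                          # initiator-side veths (peer ends go with them)
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if            # target-side veths live inside the namespace
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                                     # assumption: equivalent of _remove_spdk_ns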
00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:20:19.383 00:20:19.383 real 0m4.039s 00:20:19.383 user 0m10.906s 00:20:19.383 sys 0m0.942s 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:19.383 ************************************ 00:20:19.383 END TEST nvmf_identify 00:20:19.383 ************************************ 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.383 ************************************ 00:20:19.383 START TEST nvmf_perf 00:20:19.383 ************************************ 00:20:19.383 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:19.643 * Looking for test storage... 00:20:19.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.643 --rc genhtml_branch_coverage=1 00:20:19.643 --rc genhtml_function_coverage=1 00:20:19.643 --rc genhtml_legend=1 00:20:19.643 --rc geninfo_all_blocks=1 00:20:19.643 --rc geninfo_unexecuted_blocks=1 00:20:19.643 00:20:19.643 ' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.643 --rc genhtml_branch_coverage=1 00:20:19.643 --rc genhtml_function_coverage=1 00:20:19.643 --rc genhtml_legend=1 00:20:19.643 --rc geninfo_all_blocks=1 00:20:19.643 --rc geninfo_unexecuted_blocks=1 00:20:19.643 00:20:19.643 ' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.643 --rc genhtml_branch_coverage=1 00:20:19.643 --rc genhtml_function_coverage=1 00:20:19.643 --rc genhtml_legend=1 00:20:19.643 --rc geninfo_all_blocks=1 00:20:19.643 --rc geninfo_unexecuted_blocks=1 00:20:19.643 00:20:19.643 ' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.643 --rc genhtml_branch_coverage=1 00:20:19.643 --rc genhtml_function_coverage=1 00:20:19.643 --rc genhtml_legend=1 00:20:19.643 --rc geninfo_all_blocks=1 00:20:19.643 --rc geninfo_unexecuted_blocks=1 00:20:19.643 00:20:19.643 ' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:19.643 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:19.643 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:19.644 Cannot find device "nvmf_init_br" 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:19.644 Cannot find device "nvmf_init_br2" 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:19.644 Cannot find device "nvmf_tgt_br" 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.644 Cannot find device "nvmf_tgt_br2" 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:19.644 Cannot find device "nvmf_init_br" 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:20:19.644 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:19.902 Cannot find device "nvmf_init_br2" 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:19.902 Cannot find device "nvmf_tgt_br" 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:19.902 Cannot find device "nvmf_tgt_br2" 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:19.902 Cannot find device "nvmf_br" 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:19.902 Cannot find device "nvmf_init_if" 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:19.902 Cannot find device "nvmf_init_if2" 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.902 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:19.903 05:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:19.903 05:38:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:19.903 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:20.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:20.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:20:20.162 00:20:20.162 --- 10.0.0.3 ping statistics --- 00:20:20.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.162 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:20.162 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:20.162 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:20:20.162 00:20:20.162 --- 10.0.0.4 ping statistics --- 00:20:20.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.162 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:20.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:20.162 00:20:20.162 --- 10.0.0.1 ping statistics --- 00:20:20.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.162 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:20.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:20.162 00:20:20.162 --- 10.0.0.2 ping statistics --- 00:20:20.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.162 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=81597 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 81597 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 81597 ']' 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
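The setup recorded above is nvmf_veth_init from nvmf/common.sh: two initiator veths (10.0.0.1, 10.0.0.2) stay in the root namespace, two target veths (10.0.0.3, 10.0.0.4) are moved into nvmf_tgt_ns_spdk, everything is joined by the nvmf_br bridge, iptables rules admit TCP port 4420, and the pings confirm reachability before nvmf_tgt is launched inside the namespace. A minimal single-pair sketch of the same steps, reusing the names and addresses recorded above:
# sketch: one initiator/target pair version of the topology built above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator end + its bridge port
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target end + its bridge port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                              # bridge the root and target namespaces
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
ping -c 1 10.0.0.3                                                   # initiator -> target reachability check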
00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.162 05:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:20.162 [2024-12-16 05:38:00.346396] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:20.162 [2024-12-16 05:38:00.346566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.421 [2024-12-16 05:38:00.515956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.421 [2024-12-16 05:38:00.609042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.421 [2024-12-16 05:38:00.609146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.421 [2024-12-16 05:38:00.609178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.421 [2024-12-16 05:38:00.609196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.421 [2024-12-16 05:38:00.609215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.421 [2024-12-16 05:38:00.611012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.421 [2024-12-16 05:38:00.611171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.421 [2024-12-16 05:38:00.611286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.421 [2024-12-16 05:38:00.611297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.679 [2024-12-16 05:38:00.789645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.245 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.245 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:20:21.245 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.245 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:21.245 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:21.245 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.245 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:21.245 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:21.811 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:21.811 05:38:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:22.069 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:22.069 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:22.331 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:22.331 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:20:22.331 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:22.331 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:22.331 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:22.603 [2024-12-16 05:38:02.701674] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.603 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.861 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:22.861 05:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:23.119 05:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:23.119 05:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:23.377 05:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:23.635 [2024-12-16 05:38:03.704921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:23.635 05:38:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:23.893 05:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:23.893 05:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:23.893 05:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:23.893 05:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:25.266 Initializing NVMe Controllers 00:20:25.266 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:25.266 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:25.266 Initialization complete. Launching workers. 00:20:25.266 ======================================================== 00:20:25.266 Latency(us) 00:20:25.266 Device Information : IOPS MiB/s Average min max 00:20:25.266 PCIE (0000:00:10.0) NSID 1 from core 0: 21518.18 84.06 1486.41 379.76 8958.78 00:20:25.266 ======================================================== 00:20:25.266 Total : 21518.18 84.06 1486.41 379.76 8958.78 00:20:25.266 00:20:25.266 05:38:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:26.637 Initializing NVMe Controllers 00:20:26.637 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.637 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:26.637 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:26.637 Initialization complete. Launching workers. 
00:20:26.637 ======================================================== 00:20:26.637 Latency(us) 00:20:26.637 Device Information : IOPS MiB/s Average min max 00:20:26.637 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2962.00 11.57 337.15 129.76 6225.68 00:20:26.637 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8046.62 5554.97 12056.10 00:20:26.637 ======================================================== 00:20:26.637 Total : 3087.00 12.06 649.33 129.76 12056.10 00:20:26.637 00:20:26.637 05:38:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:28.012 Initializing NVMe Controllers 00:20:28.012 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.012 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:28.012 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:28.012 Initialization complete. Launching workers. 00:20:28.012 ======================================================== 00:20:28.012 Latency(us) 00:20:28.012 Device Information : IOPS MiB/s Average min max 00:20:28.012 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7700.55 30.08 4155.60 844.78 10257.92 00:20:28.012 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3804.80 14.86 8422.76 5051.37 16208.10 00:20:28.012 ======================================================== 00:20:28.012 Total : 11505.35 44.94 5566.74 844.78 16208.10 00:20:28.012 00:20:28.012 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:28.012 05:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:31.295 Initializing NVMe Controllers 00:20:31.295 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.295 Controller IO queue size 128, less than required. 00:20:31.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.295 Controller IO queue size 128, less than required. 00:20:31.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.295 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.295 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:31.295 Initialization complete. Launching workers. 
00:20:31.295 ======================================================== 00:20:31.295 Latency(us) 00:20:31.295 Device Information : IOPS MiB/s Average min max 00:20:31.295 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1584.31 396.08 83836.50 48918.90 311852.75 00:20:31.295 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.86 151.21 228501.60 103927.46 492976.67 00:20:31.295 ======================================================== 00:20:31.295 Total : 2189.17 547.29 123806.71 48918.90 492976.67 00:20:31.295 00:20:31.295 05:38:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:20:31.295 Initializing NVMe Controllers 00:20:31.295 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.295 Controller IO queue size 128, less than required. 00:20:31.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.295 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:31.295 Controller IO queue size 128, less than required. 00:20:31.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.295 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:31.295 WARNING: Some requested NVMe devices were skipped 00:20:31.295 No valid NVMe controllers or AIO or URING devices found 00:20:31.295 05:38:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:20:34.579 Initializing NVMe Controllers 00:20:34.579 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.579 Controller IO queue size 128, less than required. 00:20:34.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.579 Controller IO queue size 128, less than required. 00:20:34.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.579 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.579 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:34.579 Initialization complete. Launching workers. 
00:20:34.579 00:20:34.579 ==================== 00:20:34.579 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:34.579 TCP transport: 00:20:34.579 polls: 6858 00:20:34.579 idle_polls: 4086 00:20:34.579 sock_completions: 2772 00:20:34.579 nvme_completions: 5497 00:20:34.579 submitted_requests: 8312 00:20:34.579 queued_requests: 1 00:20:34.579 00:20:34.579 ==================== 00:20:34.579 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:34.579 TCP transport: 00:20:34.579 polls: 9482 00:20:34.579 idle_polls: 5760 00:20:34.579 sock_completions: 3722 00:20:34.579 nvme_completions: 6041 00:20:34.579 submitted_requests: 9110 00:20:34.579 queued_requests: 1 00:20:34.579 ======================================================== 00:20:34.579 Latency(us) 00:20:34.579 Device Information : IOPS MiB/s Average min max 00:20:34.579 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1372.89 343.22 100947.80 44930.97 375618.99 00:20:34.579 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1508.78 377.19 84983.50 45278.25 243854.31 00:20:34.579 ======================================================== 00:20:34.579 Total : 2881.67 720.42 92589.24 44930.97 375618.99 00:20:34.579 00:20:34.579 05:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:34.579 05:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.579 05:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:34.579 05:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:20:34.579 05:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:34.838 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=1060a26b-1f58-461c-8c1e-cfebadb9052f 00:20:34.838 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 1060a26b-1f58-461c-8c1e-cfebadb9052f 00:20:34.838 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=1060a26b-1f58-461c-8c1e-cfebadb9052f 00:20:34.838 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:20:34.838 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:20:34.838 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:20:34.838 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:35.095 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:20:35.095 { 00:20:35.095 "uuid": "1060a26b-1f58-461c-8c1e-cfebadb9052f", 00:20:35.095 "name": "lvs_0", 00:20:35.095 "base_bdev": "Nvme0n1", 00:20:35.095 "total_data_clusters": 1278, 00:20:35.095 "free_clusters": 1278, 00:20:35.095 "block_size": 4096, 00:20:35.095 "cluster_size": 4194304 00:20:35.095 } 00:20:35.095 ]' 00:20:35.095 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="1060a26b-1f58-461c-8c1e-cfebadb9052f") .free_clusters' 00:20:35.352 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:20:35.352 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="1060a26b-1f58-461c-8c1e-cfebadb9052f") .cluster_size' 00:20:35.352 5112 00:20:35.352 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:20:35.352 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:20:35.352 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:20:35.352 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:35.352 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1060a26b-1f58-461c-8c1e-cfebadb9052f lbd_0 5112 00:20:35.610 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4758218e-b9a4-4cb0-9308-563ec6642dbd 00:20:35.610 05:38:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4758218e-b9a4-4cb0-9308-563ec6642dbd lvs_n_0 00:20:36.177 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=84d3de17-4f38-4436-806c-868afcb0cc69 00:20:36.177 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 84d3de17-4f38-4436-806c-868afcb0cc69 00:20:36.177 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=84d3de17-4f38-4436-806c-868afcb0cc69 00:20:36.177 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:20:36.177 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:20:36.177 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:20:36.177 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:36.177 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:20:36.177 { 00:20:36.177 "uuid": "1060a26b-1f58-461c-8c1e-cfebadb9052f", 00:20:36.177 "name": "lvs_0", 00:20:36.177 "base_bdev": "Nvme0n1", 00:20:36.177 "total_data_clusters": 1278, 00:20:36.177 "free_clusters": 0, 00:20:36.177 "block_size": 4096, 00:20:36.177 "cluster_size": 4194304 00:20:36.177 }, 00:20:36.177 { 00:20:36.177 "uuid": "84d3de17-4f38-4436-806c-868afcb0cc69", 00:20:36.177 "name": "lvs_n_0", 00:20:36.177 "base_bdev": "4758218e-b9a4-4cb0-9308-563ec6642dbd", 00:20:36.177 "total_data_clusters": 1276, 00:20:36.177 "free_clusters": 1276, 00:20:36.177 "block_size": 4096, 00:20:36.177 "cluster_size": 4194304 00:20:36.177 } 00:20:36.177 ]' 00:20:36.177 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="84d3de17-4f38-4436-806c-868afcb0cc69") .free_clusters' 00:20:36.435 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:20:36.435 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="84d3de17-4f38-4436-806c-868afcb0cc69") .cluster_size' 00:20:36.435 5104 00:20:36.435 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:20:36.435 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:20:36.435 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:20:36.435 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:36.435 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 84d3de17-4f38-4436-806c-868afcb0cc69 lbd_nest_0 5104 00:20:36.693 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5272605d-3ffb-4562-bb2e-e5c6b242511d 00:20:36.693 05:38:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.951 05:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:36.951 05:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5272605d-3ffb-4562-bb2e-e5c6b242511d 00:20:37.209 05:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:37.467 05:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:37.467 05:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:37.467 05:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:37.467 05:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:37.467 05:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:37.725 Initializing NVMe Controllers 00:20:37.725 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.725 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:37.725 WARNING: Some requested NVMe devices were skipped 00:20:37.725 No valid NVMe controllers or AIO or URING devices found 00:20:37.983 05:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:37.984 05:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:50.222 Initializing NVMe Controllers 00:20:50.222 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.222 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:50.222 Initialization complete. Launching workers. 
00:20:50.222 ======================================================== 00:20:50.222 Latency(us) 00:20:50.222 Device Information : IOPS MiB/s Average min max 00:20:50.222 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 836.00 104.50 1195.93 388.84 8912.88 00:20:50.222 ======================================================== 00:20:50.222 Total : 836.00 104.50 1195.93 388.84 8912.88 00:20:50.222 00:20:50.222 05:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:50.222 05:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:50.222 05:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:50.222 Initializing NVMe Controllers 00:20:50.222 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.222 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:50.222 WARNING: Some requested NVMe devices were skipped 00:20:50.222 No valid NVMe controllers or AIO or URING devices found 00:20:50.222 05:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:50.222 05:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:00.205 Initializing NVMe Controllers 00:21:00.205 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.205 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:00.205 Initialization complete. Launching workers. 
00:21:00.205 ======================================================== 00:21:00.205 Latency(us) 00:21:00.205 Device Information : IOPS MiB/s Average min max 00:21:00.205 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1343.96 168.00 23818.05 6614.47 67988.60 00:21:00.205 ======================================================== 00:21:00.205 Total : 1343.96 168.00 23818.05 6614.47 67988.60 00:21:00.205 00:21:00.205 05:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:00.205 05:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:00.205 05:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:00.205 Initializing NVMe Controllers 00:21:00.205 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.205 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:00.205 WARNING: Some requested NVMe devices were skipped 00:21:00.205 No valid NVMe controllers or AIO or URING devices found 00:21:00.205 05:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:00.205 05:38:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:10.192 Initializing NVMe Controllers 00:21:10.192 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:10.192 Controller IO queue size 128, less than required. 00:21:10.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:10.192 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:10.192 Initialization complete. Launching workers. 
00:21:10.192 ======================================================== 00:21:10.192 Latency(us) 00:21:10.192 Device Information : IOPS MiB/s Average min max 00:21:10.192 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3599.66 449.96 35586.76 14396.34 83013.79 00:21:10.192 ======================================================== 00:21:10.192 Total : 3599.66 449.96 35586.76 14396.34 83013.79 00:21:10.192 00:21:10.192 05:38:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.192 05:38:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5272605d-3ffb-4562-bb2e-e5c6b242511d 00:21:10.761 05:38:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:11.020 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4758218e-b9a4-4cb0-9308-563ec6642dbd 00:21:11.279 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:11.279 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:11.279 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:11.279 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.279 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.539 rmmod nvme_tcp 00:21:11.539 rmmod nvme_fabrics 00:21:11.539 rmmod nvme_keyring 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 81597 ']' 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 81597 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 81597 ']' 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 81597 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81597 00:21:11.539 killing process with pid 81597 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81597' 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 81597 00:21:11.539 05:38:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 81597 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:13.447 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:21:13.706 00:21:13.706 real 0m54.268s 00:21:13.706 user 3m25.188s 00:21:13.706 sys 0m12.376s 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:13.706 ************************************ 00:21:13.706 END TEST nvmf_perf 00:21:13.706 ************************************ 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.706 ************************************ 00:21:13.706 START TEST nvmf_fio_host 00:21:13.706 ************************************ 00:21:13.706 05:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:13.966 * Looking for test storage... 00:21:13.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.966 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:13.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.967 --rc genhtml_branch_coverage=1 00:21:13.967 --rc genhtml_function_coverage=1 00:21:13.967 --rc genhtml_legend=1 00:21:13.967 --rc geninfo_all_blocks=1 00:21:13.967 --rc geninfo_unexecuted_blocks=1 00:21:13.967 00:21:13.967 ' 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:13.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.967 --rc genhtml_branch_coverage=1 00:21:13.967 --rc genhtml_function_coverage=1 00:21:13.967 --rc genhtml_legend=1 00:21:13.967 --rc geninfo_all_blocks=1 00:21:13.967 --rc geninfo_unexecuted_blocks=1 00:21:13.967 00:21:13.967 ' 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:13.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.967 --rc genhtml_branch_coverage=1 00:21:13.967 --rc genhtml_function_coverage=1 00:21:13.967 --rc genhtml_legend=1 00:21:13.967 --rc geninfo_all_blocks=1 00:21:13.967 --rc geninfo_unexecuted_blocks=1 00:21:13.967 00:21:13.967 ' 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:13.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.967 --rc genhtml_branch_coverage=1 00:21:13.967 --rc genhtml_function_coverage=1 00:21:13.967 --rc genhtml_legend=1 00:21:13.967 --rc geninfo_all_blocks=1 00:21:13.967 --rc geninfo_unexecuted_blocks=1 00:21:13.967 00:21:13.967 ' 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.967 05:38:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.967 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.968 05:38:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:13.968 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:13.968 Cannot find device "nvmf_init_br" 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:13.968 Cannot find device "nvmf_init_br2" 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:13.968 Cannot find device "nvmf_tgt_br" 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:21:13.968 Cannot find device "nvmf_tgt_br2" 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:13.968 Cannot find device "nvmf_init_br" 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:13.968 Cannot find device "nvmf_init_br2" 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:21:13.968 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:14.227 Cannot find device "nvmf_tgt_br" 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:14.227 Cannot find device "nvmf_tgt_br2" 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:14.227 Cannot find device "nvmf_br" 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:14.227 Cannot find device "nvmf_init_if" 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:14.227 Cannot find device "nvmf_init_if2" 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:21:14.227 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:14.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:14.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:14.228 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:14.487 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:14.487 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:21:14.487 00:21:14.487 --- 10.0.0.3 ping statistics --- 00:21:14.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.487 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:14.487 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:14.487 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:21:14.487 00:21:14.487 --- 10.0.0.4 ping statistics --- 00:21:14.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.487 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:14.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:14.487 00:21:14.487 --- 10.0.0.1 ping statistics --- 00:21:14.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.487 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:14.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:21:14.487 00:21:14.487 --- 10.0.0.2 ping statistics --- 00:21:14.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.487 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:14.487 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=82507 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 82507 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 82507 ']' 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.488 05:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.488 [2024-12-16 05:38:54.665939] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:14.488 [2024-12-16 05:38:54.666108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.747 [2024-12-16 05:38:54.857075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.747 [2024-12-16 05:38:54.986172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.747 [2024-12-16 05:38:54.986248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.747 [2024-12-16 05:38:54.986275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.747 [2024-12-16 05:38:54.986292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.747 [2024-12-16 05:38:54.986310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.747 [2024-12-16 05:38:54.988581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.747 [2024-12-16 05:38:54.988729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.747 [2024-12-16 05:38:54.988848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.747 [2024-12-16 05:38:54.988927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.006 [2024-12-16 05:38:55.200538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:15.573 05:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.573 05:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:15.573 05:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:15.573 [2024-12-16 05:38:55.826468] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.833 05:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:15.833 05:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.833 05:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.833 05:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:16.092 Malloc1 00:21:16.092 05:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.351 05:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:16.611 05:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:16.870 [2024-12-16 05:38:56.973935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:16.870 05:38:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:17.129 05:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:17.388 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:17.388 fio-3.35 00:21:17.388 Starting 1 thread 00:21:19.925 00:21:19.925 test: (groupid=0, jobs=1): err= 0: pid=82577: Mon Dec 16 05:38:59 2024 00:21:19.925 read: IOPS=7667, BW=30.0MiB/s (31.4MB/s)(60.1MiB/2008msec) 00:21:19.925 slat (usec): min=2, max=207, avg= 2.98, stdev= 2.76 00:21:19.925 clat (usec): min=2114, max=14747, avg=8669.33, stdev=663.41 00:21:19.925 lat (usec): min=2145, max=14749, avg=8672.31, stdev=663.25 00:21:19.925 clat percentiles (usec): 00:21:19.925 | 1.00th=[ 7373], 5.00th=[ 7767], 10.00th=[ 7963], 20.00th=[ 8160], 00:21:19.925 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:21:19.925 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:21:19.925 | 99.00th=[10421], 99.50th=[10945], 99.90th=[13698], 99.95th=[14091], 00:21:19.925 | 99.99th=[14746] 00:21:19.925 bw ( KiB/s): min=28904, max=31584, per=99.97%, avg=30660.00, stdev=1195.66, samples=4 00:21:19.925 iops : min= 7226, max= 7896, avg=7665.00, stdev=298.92, samples=4 00:21:19.925 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(60.0MiB/2008msec); 0 zone resets 00:21:19.925 slat (usec): min=2, max=155, avg= 3.06, stdev= 2.10 00:21:19.925 clat (usec): min=1696, max=14215, avg=7924.22, stdev=625.85 00:21:19.925 lat (usec): min=1724, max=14217, avg=7927.28, stdev=625.83 00:21:19.925 clat percentiles (usec): 00:21:19.925 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7504], 00:21:19.925 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:21:19.925 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:21:19.925 | 99.00th=[ 9503], 99.50th=[10028], 99.90th=[12125], 99.95th=[13698], 00:21:19.925 | 99.99th=[14222] 00:21:19.925 bw ( KiB/s): min=29888, max=31136, per=100.00%, avg=30598.00, stdev=604.74, samples=4 00:21:19.925 iops : min= 7472, max= 7784, avg=7649.50, stdev=151.19, samples=4 
00:21:19.925 lat (msec) : 2=0.01%, 4=0.13%, 10=98.37%, 20=1.49% 00:21:19.925 cpu : usr=71.65%, sys=21.38%, ctx=60, majf=0, minf=1554 00:21:19.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:19.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:19.925 issued rwts: total=15396,15360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:19.925 00:21:19.925 Run status group 0 (all jobs): 00:21:19.925 READ: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=60.1MiB (63.1MB), run=2008-2008msec 00:21:19.925 WRITE: bw=29.9MiB/s (31.3MB/s), 29.9MiB/s-29.9MiB/s (31.3MB/s-31.3MB/s), io=60.0MiB (62.9MB), run=2008-2008msec 00:21:19.925 ----------------------------------------------------- 00:21:19.925 Suppressions used: 00:21:19.925 count bytes template 00:21:19.925 1 57 /usr/src/fio/parse.c 00:21:19.925 1 8 libtcmalloc_minimal.so 00:21:19.925 ----------------------------------------------------- 00:21:19.925 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:19.925 05:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:20.184 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:20.184 fio-3.35 00:21:20.184 Starting 1 thread 00:21:22.720 00:21:22.720 test: (groupid=0, jobs=1): err= 0: pid=82623: Mon Dec 16 05:39:02 2024 00:21:22.720 read: IOPS=6980, BW=109MiB/s (114MB/s)(219MiB/2009msec) 00:21:22.720 slat (usec): min=3, max=173, avg= 4.45, stdev= 2.69 00:21:22.720 clat (usec): min=2574, max=21277, avg=10274.23, stdev=2939.60 00:21:22.720 lat (usec): min=2578, max=21282, avg=10278.68, stdev=2939.68 00:21:22.720 clat percentiles (usec): 00:21:22.720 | 1.00th=[ 5014], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7635], 00:21:22.720 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[10814], 00:21:22.720 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13960], 95.00th=[15795], 00:21:22.720 | 99.00th=[18482], 99.50th=[19268], 99.90th=[19792], 99.95th=[20055], 00:21:22.720 | 99.99th=[21365] 00:21:22.720 bw ( KiB/s): min=48128, max=62464, per=49.68%, avg=55481.00, stdev=7806.68, samples=4 00:21:22.720 iops : min= 3008, max= 3904, avg=3467.50, stdev=487.85, samples=4 00:21:22.720 write: IOPS=4074, BW=63.7MiB/s (66.8MB/s)(114MiB/1783msec); 0 zone resets 00:21:22.720 slat (usec): min=32, max=228, avg=40.08, stdev= 9.43 00:21:22.720 clat (usec): min=7808, max=26055, avg=14560.74, stdev=2661.72 00:21:22.720 lat (usec): min=7845, max=26106, avg=14600.82, stdev=2662.82 00:21:22.720 clat percentiles (usec): 00:21:22.720 | 1.00th=[ 9896], 5.00th=[10945], 10.00th=[11338], 20.00th=[12125], 00:21:22.720 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14222], 60.00th=[15008], 00:21:22.720 | 70.00th=[15926], 80.00th=[16909], 90.00th=[18220], 95.00th=[19268], 00:21:22.720 | 99.00th=[21103], 99.50th=[21627], 99.90th=[23200], 99.95th=[23462], 00:21:22.720 | 99.99th=[26084] 00:21:22.720 bw ( KiB/s): min=49632, max=65309, per=88.38%, avg=57615.25, stdev=8780.80, samples=4 00:21:22.720 iops : min= 3102, max= 4081, avg=3600.75, stdev=548.56, samples=4 00:21:22.720 lat (msec) : 4=0.09%, 10=33.16%, 20=65.80%, 50=0.94% 00:21:22.720 cpu : usr=81.47%, sys=14.19%, ctx=7, majf=0, minf=2182 00:21:22.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:22.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.720 issued rwts: total=14023,7265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.720 00:21:22.720 Run status group 0 (all jobs): 00:21:22.720 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=219MiB (230MB), run=2009-2009msec 00:21:22.720 WRITE: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=114MiB (119MB), run=1783-1783msec 00:21:22.720 ----------------------------------------------------- 00:21:22.720 Suppressions used: 00:21:22.720 count bytes template 00:21:22.720 1 57 /usr/src/fio/parse.c 00:21:22.720 217 20832 /usr/src/fio/iolog.c 00:21:22.720 1 8 libtcmalloc_minimal.so 00:21:22.720 ----------------------------------------------------- 00:21:22.720 00:21:22.720 05:39:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:22.979 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:21:23.238 Nvme0n1 00:21:23.496 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:23.755 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=ce2bdb8b-7821-4304-af69-1f1954476f75 00:21:23.755 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb ce2bdb8b-7821-4304-af69-1f1954476f75 00:21:23.755 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=ce2bdb8b-7821-4304-af69-1f1954476f75 00:21:23.755 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:23.755 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:21:23.755 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:21:23.755 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:23.755 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:23.755 { 00:21:23.755 "uuid": "ce2bdb8b-7821-4304-af69-1f1954476f75", 00:21:23.755 "name": "lvs_0", 00:21:23.755 "base_bdev": "Nvme0n1", 00:21:23.755 "total_data_clusters": 4, 00:21:23.755 "free_clusters": 4, 00:21:23.755 "block_size": 4096, 00:21:23.755 "cluster_size": 1073741824 00:21:23.755 } 00:21:23.755 ]' 00:21:23.755 05:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="ce2bdb8b-7821-4304-af69-1f1954476f75") .free_clusters' 00:21:24.014 05:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:21:24.014 05:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="ce2bdb8b-7821-4304-af69-1f1954476f75") .cluster_size' 00:21:24.014 4096 00:21:24.014 05:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:21:24.014 05:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:21:24.014 05:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1378 -- # echo 4096 00:21:24.014 05:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:24.273 a6c1a1bb-4d7d-4b53-a7e4-8179052b795e 00:21:24.273 05:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:24.532 05:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:24.791 05:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:25.053 05:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:25.053 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:25.053 fio-3.35 00:21:25.053 Starting 1 thread 
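The trace above shows the pattern autotest_common.sh's fio_plugin helper follows before every fio run: it runs ldd on the SPDK fio plugin, looks for libasan, and if found preloads the sanitizer runtime together with the plugin so ASAN interceptors are installed before fio's own allocations. A minimal standalone sketch of that pattern, assuming the paths and target address taken from the log (everything else is illustrative, not the helper's exact code):

```bash
#!/usr/bin/env bash
# Hedged sketch of the LD_PRELOAD sequence seen in the trace above.
# Paths and the TCP target address match the log; adjust for your tree.
set -euo pipefail

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# If the plugin links against ASAN, fio must load libasan first,
# otherwise the sanitizer runtime is initialized too late.
asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3; exit}')

LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
  /usr/src/fio/fio "$job" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' \
    --bs=4096
```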
00:21:27.586 00:21:27.586 test: (groupid=0, jobs=1): err= 0: pid=82733: Mon Dec 16 05:39:07 2024 00:21:27.586 read: IOPS=5121, BW=20.0MiB/s (21.0MB/s)(40.2MiB/2011msec) 00:21:27.586 slat (usec): min=2, max=340, avg= 3.73, stdev= 4.60 00:21:27.586 clat (usec): min=3678, max=20314, avg=13012.45, stdev=1067.44 00:21:27.586 lat (usec): min=3689, max=20317, avg=13016.18, stdev=1066.99 00:21:27.586 clat percentiles (usec): 00:21:27.586 | 1.00th=[10683], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:21:27.587 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:21:27.587 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:21:27.587 | 99.00th=[15533], 99.50th=[16057], 99.90th=[18220], 99.95th=[20055], 00:21:27.587 | 99.99th=[20317] 00:21:27.587 bw ( KiB/s): min=19560, max=20960, per=100.00%, avg=20492.00, stdev=631.22, samples=4 00:21:27.587 iops : min= 4890, max= 5240, avg=5123.00, stdev=157.81, samples=4 00:21:27.587 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(40.2MiB/2011msec); 0 zone resets 00:21:27.587 slat (usec): min=2, max=170, avg= 3.84, stdev= 3.27 00:21:27.587 clat (usec): min=2498, max=20471, avg=11861.34, stdev=1023.92 00:21:27.587 lat (usec): min=2516, max=20475, avg=11865.18, stdev=1023.65 00:21:27.587 clat percentiles (usec): 00:21:27.587 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10683], 20.00th=[11076], 00:21:27.587 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:21:27.587 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:21:27.587 | 99.00th=[14222], 99.50th=[14746], 99.90th=[18220], 99.95th=[18482], 00:21:27.587 | 99.99th=[20317] 00:21:27.587 bw ( KiB/s): min=20248, max=20672, per=99.84%, avg=20424.00, stdev=178.05, samples=4 00:21:27.587 iops : min= 5062, max= 5168, avg=5106.00, stdev=44.51, samples=4 00:21:27.587 lat (msec) : 4=0.04%, 10=1.07%, 20=98.83%, 50=0.05% 00:21:27.587 cpu : usr=72.79%, sys=21.04%, ctx=6, majf=0, minf=1553 00:21:27.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:27.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:27.587 issued rwts: total=10300,10285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.587 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:27.587 00:21:27.587 Run status group 0 (all jobs): 00:21:27.587 READ: bw=20.0MiB/s (21.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=40.2MiB (42.2MB), run=2011-2011msec 00:21:27.587 WRITE: bw=20.0MiB/s (20.9MB/s), 20.0MiB/s-20.0MiB/s (20.9MB/s-20.9MB/s), io=40.2MiB (42.1MB), run=2011-2011msec 00:21:27.845 ----------------------------------------------------- 00:21:27.845 Suppressions used: 00:21:27.845 count bytes template 00:21:27.845 1 58 /usr/src/fio/parse.c 00:21:27.845 1 8 libtcmalloc_minimal.so 00:21:27.845 ----------------------------------------------------- 00:21:27.845 00:21:27.845 05:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:28.104 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:28.362 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=24e97938-5aa5-4cce-9a0f-d77123e3aae1 00:21:28.362 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # 
get_lvs_free_mb 24e97938-5aa5-4cce-9a0f-d77123e3aae1 00:21:28.362 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=24e97938-5aa5-4cce-9a0f-d77123e3aae1 00:21:28.362 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:28.362 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:21:28.362 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:21:28.362 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:28.621 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:28.621 { 00:21:28.621 "uuid": "ce2bdb8b-7821-4304-af69-1f1954476f75", 00:21:28.621 "name": "lvs_0", 00:21:28.621 "base_bdev": "Nvme0n1", 00:21:28.621 "total_data_clusters": 4, 00:21:28.621 "free_clusters": 0, 00:21:28.621 "block_size": 4096, 00:21:28.621 "cluster_size": 1073741824 00:21:28.621 }, 00:21:28.621 { 00:21:28.621 "uuid": "24e97938-5aa5-4cce-9a0f-d77123e3aae1", 00:21:28.621 "name": "lvs_n_0", 00:21:28.621 "base_bdev": "a6c1a1bb-4d7d-4b53-a7e4-8179052b795e", 00:21:28.621 "total_data_clusters": 1022, 00:21:28.621 "free_clusters": 1022, 00:21:28.621 "block_size": 4096, 00:21:28.621 "cluster_size": 4194304 00:21:28.621 } 00:21:28.621 ]' 00:21:28.621 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="24e97938-5aa5-4cce-9a0f-d77123e3aae1") .free_clusters' 00:21:28.621 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:21:28.621 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="24e97938-5aa5-4cce-9a0f-d77123e3aae1") .cluster_size' 00:21:28.621 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:21:28.621 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:21:28.621 4088 00:21:28.621 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:21:28.621 05:39:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:28.879 cf6ee6ef-9e2e-47b2-a5cc-81ef63366bf2 00:21:28.879 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:29.138 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:29.397 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 
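The get_lvs_free_mb steps traced above derive the usable size of a logical volume store by pulling free_clusters and cluster_size out of bdev_lvol_get_lvstores and converting clusters to MiB (1022 clusters of 4 MiB give the 4088 MiB used for lbd_nest_0 here). A hedged sketch of that calculation; the rpc.py path and jq filters mirror the trace, the wrapper script itself is illustrative:

```bash
#!/usr/bin/env bash
# Hedged sketch of the free-space calculation seen in the trace
# (get_lvs_free_mb): free_clusters * cluster_size, reported in MiB.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
lvs_uuid=$1   # e.g. 24e97938-5aa5-4cce-9a0f-d77123e3aae1 from the log

lvs_info=$("$rpc" bdev_lvol_get_lvstores)
fc=$(jq -r ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<<"$lvs_info")
cs=$(jq -r ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size" <<<"$lvs_info")

# 1022 free clusters * 4194304 B -> 4088 MiB, matching lvs_n_0 above.
echo $(( fc * cs / 1024 / 1024 ))
```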
00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:29.655 05:39:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:29.913 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:29.913 fio-3.35 00:21:29.913 Starting 1 thread 00:21:32.469 00:21:32.469 test: (groupid=0, jobs=1): err= 0: pid=82803: Mon Dec 16 05:39:12 2024 00:21:32.469 read: IOPS=4527, BW=17.7MiB/s (18.5MB/s)(35.6MiB/2013msec) 00:21:32.469 slat (usec): min=2, max=225, avg= 3.98, stdev= 3.92 00:21:32.469 clat (usec): min=4075, max=25298, avg=14709.11, stdev=1286.74 00:21:32.469 lat (usec): min=4081, max=25301, avg=14713.09, stdev=1286.41 00:21:32.469 clat percentiles (usec): 00:21:32.469 | 1.00th=[12125], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:21:32.469 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:21:32.469 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:21:32.469 | 99.00th=[17695], 99.50th=[18220], 99.90th=[23725], 99.95th=[23987], 00:21:32.469 | 99.99th=[25297] 00:21:32.469 bw ( KiB/s): min=17224, max=18448, per=99.98%, avg=18106.00, stdev=588.98, samples=4 00:21:32.469 iops : min= 4306, max= 4612, avg=4526.50, stdev=147.24, samples=4 00:21:32.469 write: IOPS=4533, BW=17.7MiB/s (18.6MB/s)(35.6MiB/2013msec); 0 zone resets 00:21:32.469 slat (usec): min=2, max=269, avg= 4.04, stdev= 4.31 00:21:32.469 clat (usec): min=2478, max=25447, avg=13352.46, stdev=1259.75 00:21:32.469 lat (usec): min=2490, max=25450, avg=13356.50, stdev=1259.51 00:21:32.469 clat percentiles (usec): 00:21:32.469 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 
20.00th=[12518], 00:21:32.469 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:21:32.469 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:21:32.469 | 99.00th=[16057], 99.50th=[16909], 99.90th=[23725], 99.95th=[25035], 00:21:32.469 | 99.99th=[25560] 00:21:32.469 bw ( KiB/s): min=17936, max=18200, per=99.92%, avg=18120.00, stdev=123.42, samples=4 00:21:32.469 iops : min= 4484, max= 4550, avg=4530.00, stdev=30.85, samples=4 00:21:32.469 lat (msec) : 4=0.02%, 10=0.32%, 20=99.38%, 50=0.29% 00:21:32.469 cpu : usr=73.31%, sys=20.58%, ctx=19, majf=0, minf=1553 00:21:32.469 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:32.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:32.469 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:32.469 issued rwts: total=9114,9126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:32.469 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:32.469 00:21:32.469 Run status group 0 (all jobs): 00:21:32.469 READ: bw=17.7MiB/s (18.5MB/s), 17.7MiB/s-17.7MiB/s (18.5MB/s-18.5MB/s), io=35.6MiB (37.3MB), run=2013-2013msec 00:21:32.469 WRITE: bw=17.7MiB/s (18.6MB/s), 17.7MiB/s-17.7MiB/s (18.6MB/s-18.6MB/s), io=35.6MiB (37.4MB), run=2013-2013msec 00:21:32.469 ----------------------------------------------------- 00:21:32.469 Suppressions used: 00:21:32.469 count bytes template 00:21:32.469 1 58 /usr/src/fio/parse.c 00:21:32.469 1 8 libtcmalloc_minimal.so 00:21:32.469 ----------------------------------------------------- 00:21:32.469 00:21:32.469 05:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:32.729 05:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:21:32.729 05:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:32.988 05:39:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:33.247 05:39:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:33.505 05:39:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:33.764 05:39:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.332 rmmod 
nvme_tcp 00:21:34.332 rmmod nvme_fabrics 00:21:34.332 rmmod nvme_keyring 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 82507 ']' 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 82507 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 82507 ']' 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 82507 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82507 00:21:34.332 killing process with pid 82507 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82507' 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 82507 00:21:34.332 05:39:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 82507 00:21:35.709 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:21:35.710 00:21:35.710 real 0m21.873s 00:21:35.710 user 1m34.347s 00:21:35.710 sys 0m4.734s 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.710 ************************************ 00:21:35.710 END TEST nvmf_fio_host 00:21:35.710 ************************************ 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.710 ************************************ 00:21:35.710 START TEST nvmf_failover 00:21:35.710 ************************************ 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:35.710 * Looking for test storage... 
00:21:35.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:21:35.710 05:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.970 --rc genhtml_branch_coverage=1 00:21:35.970 --rc genhtml_function_coverage=1 00:21:35.970 --rc genhtml_legend=1 00:21:35.970 --rc geninfo_all_blocks=1 00:21:35.970 --rc geninfo_unexecuted_blocks=1 00:21:35.970 00:21:35.970 ' 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.970 --rc genhtml_branch_coverage=1 00:21:35.970 --rc genhtml_function_coverage=1 00:21:35.970 --rc genhtml_legend=1 00:21:35.970 --rc geninfo_all_blocks=1 00:21:35.970 --rc geninfo_unexecuted_blocks=1 00:21:35.970 00:21:35.970 ' 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.970 --rc genhtml_branch_coverage=1 00:21:35.970 --rc genhtml_function_coverage=1 00:21:35.970 --rc genhtml_legend=1 00:21:35.970 --rc geninfo_all_blocks=1 00:21:35.970 --rc geninfo_unexecuted_blocks=1 00:21:35.970 00:21:35.970 ' 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.970 --rc genhtml_branch_coverage=1 00:21:35.970 --rc genhtml_function_coverage=1 00:21:35.970 --rc genhtml_legend=1 00:21:35.970 --rc geninfo_all_blocks=1 00:21:35.970 --rc geninfo_unexecuted_blocks=1 00:21:35.970 00:21:35.970 ' 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.970 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.971 
05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:35.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:35.971 Cannot find device "nvmf_init_br" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:35.971 Cannot find device "nvmf_init_br2" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:21:35.971 Cannot find device "nvmf_tgt_br" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:35.971 Cannot find device "nvmf_tgt_br2" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:35.971 Cannot find device "nvmf_init_br" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:35.971 Cannot find device "nvmf_init_br2" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:35.971 Cannot find device "nvmf_tgt_br" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:35.971 Cannot find device "nvmf_tgt_br2" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:35.971 Cannot find device "nvmf_br" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:35.971 Cannot find device "nvmf_init_if" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:35.971 Cannot find device "nvmf_init_if2" 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:35.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:35.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:35.971 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:36.230 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:36.230 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:36.230 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:36.231 
05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:36.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:36.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:21:36.231 00:21:36.231 --- 10.0.0.3 ping statistics --- 00:21:36.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.231 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:36.231 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:36.231 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:21:36.231 00:21:36.231 --- 10.0.0.4 ping statistics --- 00:21:36.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.231 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:36.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:36.231 00:21:36.231 --- 10.0.0.1 ping statistics --- 00:21:36.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.231 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:36.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:21:36.231 00:21:36.231 --- 10.0.0.2 ping statistics --- 00:21:36.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.231 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=83111 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 83111 00:21:36.231 05:39:16 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 83111 ']' 00:21:36.231 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.490 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.490 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.490 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.490 05:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:36.490 [2024-12-16 05:39:16.585268] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:36.490 [2024-12-16 05:39:16.585878] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.749 [2024-12-16 05:39:16.753878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:36.749 [2024-12-16 05:39:16.837008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.749 [2024-12-16 05:39:16.837258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.749 [2024-12-16 05:39:16.837353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.749 [2024-12-16 05:39:16.837447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.749 [2024-12-16 05:39:16.837528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.749 [2024-12-16 05:39:16.839246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.749 [2024-12-16 05:39:16.839386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.749 [2024-12-16 05:39:16.839427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.749 [2024-12-16 05:39:16.995329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:37.317 05:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.317 05:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:37.317 05:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.317 05:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.317 05:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:37.317 05:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.317 05:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:37.577 [2024-12-16 05:39:17.791750] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.577 05:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:37.836 Malloc0 00:21:38.095 05:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.095 05:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.354 05:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:38.612 [2024-12-16 05:39:18.807189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:38.612 05:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:38.889 [2024-12-16 05:39:19.051415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:38.889 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:21:39.166 [2024-12-16 05:39:19.287654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=83164 00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
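(Annotation, not part of the captured output.) At this point the target side is fully assembled: the TCP transport, the Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with a namespace and listeners on 10.0.0.3 ports 4420, 4421 and 4422, plus a bdevperf instance waiting on /var/tmp/bdevperf.sock. As a hedged sketch under the paths used in this run, the state built by the rpc.py calls above could be inspected by hand with stock SPDK RPCs that the test itself does not invoke (nvmf_get_subsystems and bdev_get_bdevs; the $rpc shorthand is just a convenience variable):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target-side socket (default /var/tmp/spdk.sock): expect cnode1 with namespace 1 and the three TCP listeners
$rpc nvmf_get_subsystems
# the 64 MiB, 512-byte-block malloc bdev created above and exported as the subsystem's namespace
$rpc bdev_get_bdevs -b Malloc0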
00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 83164 /var/tmp/bdevperf.sock 00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 83164 ']' 00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.166 05:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:40.108 05:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.108 05:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:40.108 05:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:40.675 NVMe0n1 00:21:40.675 05:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:40.934 00:21:40.934 05:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=83188 00:21:40.934 05:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.934 05:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:41.872 05:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:42.131 [2024-12-16 05:39:22.222505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:42.131 [2024-12-16 05:39:22.222581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:42.131 [2024-12-16 05:39:22.222596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:42.131 05:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:45.420 05:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:45.420 00:21:45.420 05:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:45.679 05:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:48.966 05:39:28 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:48.966 [2024-12-16 05:39:29.123376] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:48.966 05:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:49.902 05:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:21:50.470 05:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 83188 00:21:57.044 { 00:21:57.044 "results": [ 00:21:57.044 { 00:21:57.044 "job": "NVMe0n1", 00:21:57.044 "core_mask": "0x1", 00:21:57.044 "workload": "verify", 00:21:57.044 "status": "finished", 00:21:57.044 "verify_range": { 00:21:57.044 "start": 0, 00:21:57.044 "length": 16384 00:21:57.044 }, 00:21:57.044 "queue_depth": 128, 00:21:57.044 "io_size": 4096, 00:21:57.044 "runtime": 15.0119, 00:21:57.044 "iops": 8004.4498031561625, 00:21:57.044 "mibps": 31.26738204357876, 00:21:57.044 "io_failed": 3301, 00:21:57.044 "io_timeout": 0, 00:21:57.044 "avg_latency_us": 15531.921521132941, 00:21:57.044 "min_latency_us": 681.4254545454545, 00:21:57.044 "max_latency_us": 17277.672727272726 00:21:57.044 } 00:21:57.044 ], 00:21:57.044 "core_count": 1 00:21:57.044 } 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 83164 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 83164 ']' 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 83164 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83164 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.044 killing process with pid 83164 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83164' 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 83164 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 83164 00:21:57.044 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:57.044 [2024-12-16 05:39:19.389028] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:21:57.044 [2024-12-16 05:39:19.389210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83164 ] 00:21:57.044 [2024-12-16 05:39:19.557034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.044 [2024-12-16 05:39:19.647226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.044 [2024-12-16 05:39:19.804121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:57.044 Running I/O for 15 seconds... 00:21:57.044 7888.00 IOPS, 30.81 MiB/s [2024-12-16T05:39:37.303Z] [2024-12-16 05:39:22.222750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.044 [2024-12-16 05:39:22.222811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.044 [2024-12-16 05:39:22.222856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.044 [2024-12-16 05:39:22.222878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.044 [2024-12-16 05:39:22.222903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.044 [2024-12-16 05:39:22.222923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.044 [2024-12-16 05:39:22.222945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.044 [2024-12-16 05:39:22.222964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.044 [2024-12-16 05:39:22.222987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.044 [2024-12-16 05:39:22.223006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.223047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.223091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.223135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:57.045 [2024-12-16 05:39:22.223157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.223176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.223217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.223277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.223321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.223973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.223994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.224168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.224211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.224255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.224303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.224360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.224406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.224462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.045 [2024-12-16 05:39:22.224511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:124 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.045 [2024-12-16 05:39:22.224926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.045 [2024-12-16 05:39:22.224948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.224967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.224988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73952 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.046 [2024-12-16 05:39:22.225279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.046 [2024-12-16 05:39:22.225320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.046 [2024-12-16 05:39:22.225360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.046 [2024-12-16 05:39:22.225406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.046 [2024-12-16 05:39:22.225446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.046 [2024-12-16 05:39:22.225487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.046 
[2024-12-16 05:39:22.225529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.046 [2024-12-16 05:39:22.225569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.225962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.225981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.046 [2024-12-16 05:39:22.226634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.046 [2024-12-16 05:39:22.226676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.046 [2024-12-16 05:39:22.226698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.226717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.226748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.226768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.226792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.226811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.226833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.226852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.226874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.226893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.226914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.226950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.226973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.226992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 
[2024-12-16 05:39:22.227324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.047 [2024-12-16 05:39:22.227387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.047 [2024-12-16 05:39:22.227429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.047 [2024-12-16 05:39:22.227485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.047 [2024-12-16 05:39:22.227529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.047 [2024-12-16 05:39:22.227572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.047 [2024-12-16 05:39:22.227628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.047 [2024-12-16 05:39:22.227685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.047 [2024-12-16 05:39:22.227730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.227964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.227984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:76 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.047 [2024-12-16 05:39:22.228443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.047 [2024-12-16 05:39:22.228466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:22.228485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:22.228515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:22.228536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:22.228558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:22.228578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:22.228600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:22.228632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:22.228696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.048 [2024-12-16 05:39:22.228718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.048 [2024-12-16 05:39:22.228735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74712 len:8 PRP1 0x0 PRP2 0x0 00:21:57.048 [2024-12-16 05:39:22.228753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:22.229016] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:21:57.048 [2024-12-16 05:39:22.229092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.048 [2024-12-16 05:39:22.229120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:22.229146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.048 [2024-12-16 05:39:22.229164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:22.229182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.048 [2024-12-16 05:39:22.229199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:22.229217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.048 [2024-12-16 05:39:22.229235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:22.229258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:57.048 [2024-12-16 05:39:22.233025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:57.048 [2024-12-16 05:39:22.233093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:21:57.048 [2024-12-16 05:39:22.257073] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
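The entries above form one complete failover cycle: queued I/O on the failing path is completed with ABORTED - SQ DELETION status, bdev_nvme starts failover from 10.0.0.3:4420 to 10.0.0.3:4421, the admin queue's ASYNC EVENT REQUESTs are aborted, and the controller reset finishes successfully. A hypothetical helper like the sketch below (not part of the SPDK tree; the script name and regular expressions are assumptions based only on the message formats visible in this output, and it assumes one log entry per console line) could summarize such a log:

#!/usr/bin/env python3
"""Illustrative sketch: summarize failover activity from console output
formatted like the SPDK log lines shown in this build log."""
import re
import sys
from collections import Counter

# Patterns are assumptions derived from the messages visible above.
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: \[(?P<nqn>[^,]+), \d+\] "
    r"Start failover from (?P<src>\S+) to (?P<dst>\S+)"
)
RESET_OK_RE = re.compile(r"Resetting controller successful")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION")

def summarize(lines):
    failovers = []          # (source trid, destination trid) pairs, in order
    resets_ok = 0           # successful controller resets observed
    aborted = Counter()     # aborted completions, keyed by queue id
    for line in lines:
        m = FAILOVER_RE.search(line)
        if m:
            failovers.append((m.group("src"), m.group("dst")))
        if RESET_OK_RE.search(line):
            resets_ok += 1
        if ABORT_RE.search(line):
            qid = re.search(r"qid:(\d+)", line)
            aborted[qid.group(1) if qid else "?"] += 1
    return failovers, resets_ok, aborted

if __name__ == "__main__":
    failovers, resets_ok, aborted = summarize(sys.stdin)
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")
    print(f"successful resets: {resets_ok}")
    for qid, count in sorted(aborted.items()):
        print(f"aborted completions on qid {qid}: {count}")

Fed this console output on stdin, such a script would report the failover pairs (4420 -> 4421, then 4421 -> 4422 further down) and the count of aborted completions per queue.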
00:21:57.048 7814.00 IOPS, 30.52 MiB/s [2024-12-16T05:39:37.307Z] 7911.00 IOPS, 30.90 MiB/s [2024-12-16T05:39:37.307Z] 7953.25 IOPS, 31.07 MiB/s [2024-12-16T05:39:37.307Z] [2024-12-16 05:39:25.829625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.829714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.829766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.829808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.829832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.829852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.829871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.829889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.829908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.829926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.829945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.829963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.829982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.829999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.048 [2024-12-16 05:39:25.830595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.830667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.830709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.830748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.048 [2024-12-16 05:39:25.830788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.048 [2024-12-16 05:39:25.830809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.830837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.830860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.830880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.830900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.830918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.830939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.830974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.830993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.049 [2024-12-16 05:39:25.831108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.049 [2024-12-16 05:39:25.831146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.049 [2024-12-16 05:39:25.831183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.049 [2024-12-16 05:39:25.831223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.049 [2024-12-16 05:39:25.831260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.049 [2024-12-16 05:39:25.831297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.049 [2024-12-16 05:39:25.831349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.049 [2024-12-16 05:39:25.831395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 
[2024-12-16 05:39:25.831416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.831978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.831998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.832016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.832036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.832054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.832073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.832091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.832111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.832129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.049 [2024-12-16 05:39:25.832148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.049 [2024-12-16 05:39:25.832166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.832366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.832412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.832452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.832489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.832525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.832562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.832598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49456 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.832650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.832979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.832996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 
[2024-12-16 05:39:25.833033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.833071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.833109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.833145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.833182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.833218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.050 [2024-12-16 05:39:25.833255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.050 [2024-12-16 05:39:25.833756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.050 [2024-12-16 05:39:25.833774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.833794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.833812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.833831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.833849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.833869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.833887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.833907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.833925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.833944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.833973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.833995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:25.834322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:25.834359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:25.834397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:25.834434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:25.834483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:25.834522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:25.834558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:25.834596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:25.834651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 
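The IOPS/MiB/s ticker entries interleaved with these dumps (7814.00 IOPS, 30.52 MiB/s earlier; 7899.20 IOPS, 30.86 MiB/s just below) are consistent with 4 KiB per I/O, which also matches the len:8 commands (512-byte blocks) and the 0x1000-byte SGL lengths printed with each command. A minimal, purely illustrative check:

# Relate the IOPS ticker values in this log to MiB/s, assuming 4 KiB per I/O
# (len:8 blocks of 512 B, i.e. the 0x1000-byte SGL length in the dumps above).
IO_SIZE = 8 * 512                      # bytes per I/O
for iops in (7814.00, 7899.20, 7972.44):
    mibps = iops * IO_SIZE / (1024 ** 2)
    print(f"{iops:.2f} IOPS -> {mibps:.2f} MiB/s")
# Prints 30.52, 30.86 and 31.14 MiB/s, matching the ticker values in the log.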
[2024-12-16 05:39:25.834708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.051 [2024-12-16 05:39:25.834918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.834937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:21:57.051 [2024-12-16 05:39:25.834968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.051 [2024-12-16 05:39:25.834986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.051 [2024-12-16 05:39:25.835002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49712 len:8 PRP1 0x0 PRP2 0x0 00:21:57.051 [2024-12-16 05:39:25.835020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.835258] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:21:57.051 [2024-12-16 05:39:25.835332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.051 [2024-12-16 05:39:25.835360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.835380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:57.051 [2024-12-16 05:39:25.835398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.835417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.051 [2024-12-16 05:39:25.835434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.835452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.051 [2024-12-16 05:39:25.835469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:25.835486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:57.051 [2024-12-16 05:39:25.835537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:21:57.051 [2024-12-16 05:39:25.839189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:57.051 [2024-12-16 05:39:25.865376] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:21:57.051 7899.20 IOPS, 30.86 MiB/s [2024-12-16T05:39:37.310Z] 7926.67 IOPS, 30.96 MiB/s [2024-12-16T05:39:37.310Z] 7951.00 IOPS, 31.06 MiB/s [2024-12-16T05:39:37.310Z] 7966.12 IOPS, 31.12 MiB/s [2024-12-16T05:39:37.310Z] 7972.44 IOPS, 31.14 MiB/s [2024-12-16T05:39:37.310Z] [2024-12-16 05:39:30.410456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.051 [2024-12-16 05:39:30.410544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.051 [2024-12-16 05:39:30.410583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.410619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.410645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.410664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.410683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.410701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.410721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.410763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.410785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.410804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.410823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.410841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.410860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.410877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.410897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.410914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.410934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.410952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.410971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.410989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88576 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.411829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.411866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.411930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.411971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.411991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:57.052 [2024-12-16 05:39:30.412009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.412028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.412046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.412065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.412084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.412103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.412122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.412141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.052 [2024-12-16 05:39:30.412159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.412179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.412197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.052 [2024-12-16 05:39:30.412216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.052 [2024-12-16 05:39:30.412234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412409] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.412828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.412867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.412904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.412957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.412977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.412995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.053 [2024-12-16 05:39:30.413600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.053 [2024-12-16 05:39:30.413653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.053 [2024-12-16 05:39:30.413673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.413690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.413709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.413727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.413746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.413764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.413792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.413811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.413831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.413849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.413869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.413886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.413905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.413923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.413941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.413959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.413978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.413995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 
05:39:30.414014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.054 [2024-12-16 05:39:30.414634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.414670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.414709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.414744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.414790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.414830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.414883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.414936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.414975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.414995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.415013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.415034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.415069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.415089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.415108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.415129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.415148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.415169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.415189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.415209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.415227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.415248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88936 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.054 [2024-12-16 05:39:30.415267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.054 [2024-12-16 05:39:30.415286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:21:57.054 [2024-12-16 05:39:30.415310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.054 [2024-12-16 05:39:30.415334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.054 [2024-12-16 05:39:30.415353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88944 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.415385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.415405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.415434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.055 [2024-12-16 05:39:30.415448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89384 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.415464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.415504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.415518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.055 [2024-12-16 05:39:30.415533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89392 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.415550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.415566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.415580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.055 [2024-12-16 05:39:30.415595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89400 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.415611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.415628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.415642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.055 [2024-12-16 05:39:30.415656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89408 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.415687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.415710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.415724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:21:57.055 [2024-12-16 05:39:30.415739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89416 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.415771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.415788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.415801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.055 [2024-12-16 05:39:30.415815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89424 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.415831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.415864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.415877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.055 [2024-12-16 05:39:30.415892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89432 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.415918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.415946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.415962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.055 [2024-12-16 05:39:30.415977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89440 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.415994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.416027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.416041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.055 [2024-12-16 05:39:30.416055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89448 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.416072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.416093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.055 [2024-12-16 05:39:30.416108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.055 [2024-12-16 05:39:30.416123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89456 len:8 PRP1 0x0 PRP2 0x0 00:21:57.055 [2024-12-16 05:39:30.416140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.416409] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:21:57.055 [2024-12-16 05:39:30.416481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.055 [2024-12-16 05:39:30.416508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.416542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.055 [2024-12-16 05:39:30.416560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.416578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.055 [2024-12-16 05:39:30.416595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.416612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.055 [2024-12-16 05:39:30.416629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.055 [2024-12-16 05:39:30.416658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:57.055 [2024-12-16 05:39:30.420679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:57.055 [2024-12-16 05:39:30.420732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:21:57.055 [2024-12-16 05:39:30.450798] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
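The failover sequence recorded above (queued I/O aborted with "ABORTED - SQ DELETION", a failover from 10.0.0.3:4422 to 10.0.0.3:4420, then "Resetting controller successful") comes from the multipath wiring that host/failover.sh sets up over RPC. A condensed sketch of that wiring, reassembled from the rpc.py invocations that appear later in this log, is below; RPC, SOCK and NQN are illustrative shorthand, and the address, ports and subsystem NQN are the ones used by this particular run rather than defaults.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on the extra ports the host can fail over to
# (the target-side rpc.py talks to its default socket here, as in the log).
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4422

# Attach the same subsystem once per path against the bdevperf app with
# -x failover, so bdev_nvme keeps the additional trids as failover targets.
for port in 4420 4421 4422; do
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s "$port" -f ipv4 -n "$NQN" -x failover
done

# Detaching the active path is what triggers the "Start failover from ..."
# and "Resetting controller successful" notices captured above.
"$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 \
  -s 4420 -f ipv4 -n "$NQN"

One successful reset per cycled path is presumably why failover.sh@65 below expects grep -c 'Resetting controller successful' to come back as exactly 3.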
00:21:57.055 7940.90 IOPS, 31.02 MiB/s [2024-12-16T05:39:37.314Z] 7956.55 IOPS, 31.08 MiB/s [2024-12-16T05:39:37.314Z] 7970.17 IOPS, 31.13 MiB/s [2024-12-16T05:39:37.314Z] 7980.38 IOPS, 31.17 MiB/s [2024-12-16T05:39:37.314Z] 7992.00 IOPS, 31.22 MiB/s [2024-12-16T05:39:37.314Z] 8002.80 IOPS, 31.26 MiB/s 00:21:57.055 Latency(us) 00:21:57.055 [2024-12-16T05:39:37.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.055 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:57.055 Verification LBA range: start 0x0 length 0x4000 00:21:57.055 NVMe0n1 : 15.01 8004.45 31.27 219.89 0.00 15531.92 681.43 17277.67 00:21:57.055 [2024-12-16T05:39:37.314Z] =================================================================================================================== 00:21:57.055 [2024-12-16T05:39:37.314Z] Total : 8004.45 31.27 219.89 0.00 15531.92 681.43 17277.67 00:21:57.055 Received shutdown signal, test time was about 15.000000 seconds 00:21:57.055 00:21:57.055 Latency(us) 00:21:57.055 [2024-12-16T05:39:37.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.055 [2024-12-16T05:39:37.314Z] =================================================================================================================== 00:21:57.055 [2024-12-16T05:39:37.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=83371 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 83371 /var/tmp/bdevperf.sock 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 83371 ']' 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.055 05:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:57.992 05:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.992 05:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:57.992 05:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:58.251 [2024-12-16 05:39:38.269242] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:58.251 05:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:21:58.510 [2024-12-16 05:39:38.545500] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:21:58.510 05:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:58.769 NVMe0n1 00:21:58.769 05:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:59.028 00:21:59.028 05:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:59.286 00:21:59.286 05:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:59.286 05:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:59.545 05:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:59.804 05:39:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:03.091 05:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:03.091 05:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:03.091 05:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=83451 00:22:03.092 05:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 83451 00:22:03.092 05:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:04.468 { 00:22:04.468 "results": [ 00:22:04.468 { 00:22:04.468 "job": "NVMe0n1", 00:22:04.468 "core_mask": "0x1", 00:22:04.468 "workload": "verify", 00:22:04.468 "status": "finished", 00:22:04.468 "verify_range": { 00:22:04.468 "start": 0, 00:22:04.468 "length": 16384 00:22:04.468 }, 00:22:04.468 "queue_depth": 128, 
00:22:04.468 "io_size": 4096, 00:22:04.468 "runtime": 1.021652, 00:22:04.468 "iops": 6283.940128341157, 00:22:04.468 "mibps": 24.546641126332645, 00:22:04.468 "io_failed": 0, 00:22:04.468 "io_timeout": 0, 00:22:04.468 "avg_latency_us": 20293.30661682243, 00:22:04.468 "min_latency_us": 2710.807272727273, 00:22:04.468 "max_latency_us": 18826.705454545456 00:22:04.468 } 00:22:04.468 ], 00:22:04.468 "core_count": 1 00:22:04.468 } 00:22:04.468 05:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:04.468 [2024-12-16 05:39:37.106122] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:04.468 [2024-12-16 05:39:37.106299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83371 ] 00:22:04.468 [2024-12-16 05:39:37.286487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.468 [2024-12-16 05:39:37.379845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.468 [2024-12-16 05:39:37.534179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:04.468 [2024-12-16 05:39:39.995077] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:04.468 [2024-12-16 05:39:39.995213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.468 [2024-12-16 05:39:39.995251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.469 [2024-12-16 05:39:39.995277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.469 [2024-12-16 05:39:39.995299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.469 [2024-12-16 05:39:39.995318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.469 [2024-12-16 05:39:39.995338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.469 [2024-12-16 05:39:39.995357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.469 [2024-12-16 05:39:39.995376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.469 [2024-12-16 05:39:39.995399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:04.469 [2024-12-16 05:39:39.995476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:04.469 [2024-12-16 05:39:39.995522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:04.469 [2024-12-16 05:39:40.006442] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:22:04.469 Running I/O for 1 seconds... 00:22:04.469 6292.00 IOPS, 24.58 MiB/s 00:22:04.469 Latency(us) 00:22:04.469 [2024-12-16T05:39:44.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.469 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:04.469 Verification LBA range: start 0x0 length 0x4000 00:22:04.469 NVMe0n1 : 1.02 6283.94 24.55 0.00 0.00 20293.31 2710.81 18826.71 00:22:04.469 [2024-12-16T05:39:44.728Z] =================================================================================================================== 00:22:04.469 [2024-12-16T05:39:44.728Z] Total : 6283.94 24.55 0.00 0.00 20293.31 2710.81 18826.71 00:22:04.469 05:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:04.469 05:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:04.728 05:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:04.728 05:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:04.728 05:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.293 05:39:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:05.293 05:39:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 83371 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 83371 ']' 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 83371 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83371 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:08.617 killing process with pid 83371 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83371' 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 83371 00:22:08.617 05:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 83371 00:22:09.554 05:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:09.554 05:39:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
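The headline numbers in the one-second verification table above are also available programmatically in the JSON object that bdevperf.py perform_tests printed a few lines earlier (the "results" block). A throwaway helper along these lines could pull them out of such a dump; results.json is a made-up name for wherever that object gets saved, and the field names are the ones visible in this log.

python3 - results.json <<'PY'
import json, sys

# Load a bdevperf result object (same shape as the "results" block above).
with open(sys.argv[1]) as f:
    doc = json.load(f)

# Print the first job's throughput and latency summary.
r = doc["results"][0]
print(f'{r["job"]}: {r["iops"]:.2f} IOPS over {r["runtime"]:.2f}s, '
      f'avg latency {r["avg_latency_us"]:.2f} us '
      f'(min {r["min_latency_us"]:.2f}, max {r["max_latency_us"]:.2f})')
PY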
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:09.813 rmmod nvme_tcp 00:22:09.813 rmmod nvme_fabrics 00:22:09.813 rmmod nvme_keyring 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 83111 ']' 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 83111 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 83111 ']' 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 83111 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.813 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83111 00:22:10.072 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:10.072 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:10.072 killing process with pid 83111 00:22:10.072 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83111' 00:22:10.072 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 83111 00:22:10.072 05:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 83111 00:22:11.009 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:11.009 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:11.009 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:11.009 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:11.009 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:11.009 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:11.009 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:11.009 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.010 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:11.268 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:11.268 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.268 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:22:11.269 00:22:11.269 real 0m35.442s 00:22:11.269 user 2m15.368s 00:22:11.269 sys 0m5.666s 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:11.269 ************************************ 00:22:11.269 END TEST nvmf_failover 00:22:11.269 ************************************ 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.269 ************************************ 00:22:11.269 START TEST nvmf_host_discovery 00:22:11.269 ************************************ 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:11.269 * Looking for test storage... 
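The nvmftestfini teardown traced above unwinds everything the failover test created: the target process is killed, the kernel initiator modules are unloaded, the SPDK-tagged iptables rules are dropped, and the veth/bridge/namespace topology is deleted. A minimal standalone sketch of the same sequence, assuming root privileges and the interface, bridge, and namespace names used by test/nvmf/common.sh in this run; the final namespace removal (_remove_spdk_ns) is trace-disabled in the log, so the ip netns delete shown here is an assumption about its effect:

    # stop the target (83111 in this log) and unload the kernel initiator modules
    kill "$nvmfpid" && wait "$nvmfpid"      # $nvmfpid is a placeholder for the nvmf_tgt PID
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # drop only the SPDK-tagged iptables rules, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # detach the bridge-facing veth ends and bring them down
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
    done
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" down
    done

    # delete the bridge, the initiator-side veths, the target-side veths, and the namespace
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of _remove_spdk_ns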
00:22:11.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:11.269 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:11.528 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.529 --rc genhtml_branch_coverage=1 00:22:11.529 --rc genhtml_function_coverage=1 00:22:11.529 --rc genhtml_legend=1 00:22:11.529 --rc geninfo_all_blocks=1 00:22:11.529 --rc geninfo_unexecuted_blocks=1 00:22:11.529 00:22:11.529 ' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.529 --rc genhtml_branch_coverage=1 00:22:11.529 --rc genhtml_function_coverage=1 00:22:11.529 --rc genhtml_legend=1 00:22:11.529 --rc geninfo_all_blocks=1 00:22:11.529 --rc geninfo_unexecuted_blocks=1 00:22:11.529 00:22:11.529 ' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.529 --rc genhtml_branch_coverage=1 00:22:11.529 --rc genhtml_function_coverage=1 00:22:11.529 --rc genhtml_legend=1 00:22:11.529 --rc geninfo_all_blocks=1 00:22:11.529 --rc geninfo_unexecuted_blocks=1 00:22:11.529 00:22:11.529 ' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.529 --rc genhtml_branch_coverage=1 00:22:11.529 --rc genhtml_function_coverage=1 00:22:11.529 --rc genhtml_legend=1 00:22:11.529 --rc geninfo_all_blocks=1 00:22:11.529 --rc geninfo_unexecuted_blocks=1 00:22:11.529 00:22:11.529 ' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.529 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:11.529 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
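nvmf_veth_init, which runs next in the log, builds a small bridged veth topology: two initiator-side endpoints stay in the root namespace (10.0.0.1 and 10.0.0.2), two target-side endpoints are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and all four bridge-facing peer ends are enslaved to nvmf_br. A condensed sketch of the same setup using the names from this run, assuming root privileges; the real helper also cleans up any leftovers first and installs SPDK-tagged iptables ACCEPT rules for port 4420:

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if is the addressed endpoint, *_br is the bridge-facing peer
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # move the target endpoints into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and bridge the peer ends together
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # sanity check: the root namespace should reach the target-side address
    ping -c 1 10.0.0.3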
00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:11.530 Cannot find device "nvmf_init_br" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:11.530 Cannot find device "nvmf_init_br2" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:11.530 Cannot find device "nvmf_tgt_br" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:11.530 Cannot find device "nvmf_tgt_br2" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:11.530 Cannot find device "nvmf_init_br" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:11.530 Cannot find device "nvmf_init_br2" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:11.530 Cannot find device "nvmf_tgt_br" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:11.530 Cannot find device "nvmf_tgt_br2" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:11.530 Cannot find device "nvmf_br" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:11.530 Cannot find device "nvmf_init_if" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:11.530 Cannot find device "nvmf_init_if2" 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:11.530 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:11.530 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:11.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:11.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:22:11.789 00:22:11.789 --- 10.0.0.3 ping statistics --- 00:22:11.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.789 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:11.789 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:11.789 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:22:11.789 00:22:11.789 --- 10.0.0.4 ping statistics --- 00:22:11.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.789 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:11.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:11.789 00:22:11.789 --- 10.0.0.1 ping statistics --- 00:22:11.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.789 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:11.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:11.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:22:11.789 00:22:11.789 --- 10.0.0.2 ping statistics --- 00:22:11.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.789 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.789 05:39:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.789 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:11.789 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.789 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.789 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.790 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=83785 00:22:11.790 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:11.790 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 83785 00:22:11.790 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 83785 ']' 00:22:11.790 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.790 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.790 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.790 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.790 05:39:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.048 [2024-12-16 05:39:52.144083] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:12.049 [2024-12-16 05:39:52.144258] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.307 [2024-12-16 05:39:52.316012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.307 [2024-12-16 05:39:52.398679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.307 [2024-12-16 05:39:52.398732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.307 [2024-12-16 05:39:52.398748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.307 [2024-12-16 05:39:52.398769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.307 [2024-12-16 05:39:52.398781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.307 [2024-12-16 05:39:52.399735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.307 [2024-12-16 05:39:52.543814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:12.876 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.876 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:12.876 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.876 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.876 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.135 [2024-12-16 05:39:53.172332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.135 [2024-12-16 05:39:53.180523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.135 05:39:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.135 null0 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.135 null1 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.135 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=83816 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 83816 /tmp/host.sock 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 83816 ']' 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.135 05:39:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:13.135 [2024-12-16 05:39:53.305144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:13.135 [2024-12-16 05:39:53.305277] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83816 ] 00:22:13.394 [2024-12-16 05:39:53.478829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.394 [2024-12-16 05:39:53.603803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.653 [2024-12-16 05:39:53.798363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.222 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@91 -- # get_subsystem_names 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:14.481 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.482 [2024-12-16 05:39:54.608933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:14.482 05:39:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.482 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:14.741 05:39:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:15.001 [2024-12-16 05:39:55.257104] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:15.001 [2024-12-16 05:39:55.257161] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:15.001 [2024-12-16 05:39:55.257200] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:15.261 
[2024-12-16 05:39:55.263168] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:15.261 [2024-12-16 05:39:55.324846] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:15.261 [2024-12-16 05:39:55.326403] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:22:15.261 [2024-12-16 05:39:55.328923] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:15.261 [2024-12-16 05:39:55.329005] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:15.261 [2024-12-16 05:39:55.335517] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.829 
05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:15.829 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:15.830 05:39:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:15.830 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:15.830 [2024-12-16 05:39:56.079479] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:22:16.089 [2024-12-16 05:39:56.087246] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
00:22:16.089 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.089 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:16.089 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.089 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:16.089 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:16.089 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.090 [2024-12-16 05:39:56.176170] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:16.090 [2024-12-16 05:39:56.177021] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:16.090 [2024-12-16 05:39:56.177079] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:16.090 [2024-12-16 05:39:56.183096] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.090 [2024-12-16 05:39:56.241734] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:22:16.090 [2024-12-16 05:39:56.241804] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:16.090 [2024-12-16 05:39:56.241824] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:16.090 [2024-12-16 05:39:56.241836] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:16.090 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( 
max-- )) 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.350 [2024-12-16 05:39:56.405389] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:16.350 [2024-12-16 05:39:56.405450] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:16.350 [2024-12-16 05:39:56.407106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:16.350 [2024-12-16 05:39:56.407162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.350 [2024-12-16 05:39:56.407181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:16.350 [2024-12-16 05:39:56.407194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.350 [2024-12-16 05:39:56.407206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:16.350 [2024-12-16 05:39:56.407217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.350 [2024-12-16 05:39:56.407229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:16.350 [2024-12-16 05:39:56.407241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.350 [2024-12-16 05:39:56.407252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is 
same with the state(6) to be set 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:16.350 [2024-12-16 05:39:56.411420] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:22:16.350 [2024-12-16 05:39:56.411466] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:16.350 [2024-12-16 05:39:56.411560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:16.350 05:39:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:16.350 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 
-- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.351 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.610 05:39:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:16.610 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.611 05:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 [2024-12-16 05:39:57.818391] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:17.992 [2024-12-16 05:39:57.818440] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:17.992 [2024-12-16 05:39:57.818476] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:17.992 [2024-12-16 05:39:57.824455] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:22:17.992 [2024-12-16 05:39:57.883061] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:22:17.992 [2024-12-16 05:39:57.884269] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 
00:22:17.992 [2024-12-16 05:39:57.886687] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:17.992 [2024-12-16 05:39:57.886754] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:17.992 [2024-12-16 05:39:57.889037] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 request: 00:22:17.992 { 00:22:17.992 "name": "nvme", 00:22:17.992 "trtype": "tcp", 00:22:17.992 "traddr": "10.0.0.3", 00:22:17.992 "adrfam": "ipv4", 00:22:17.992 "trsvcid": "8009", 00:22:17.992 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:17.992 "wait_for_attach": true, 00:22:17.992 "method": "bdev_nvme_start_discovery", 00:22:17.992 "req_id": 1 00:22:17.992 } 00:22:17.992 Got JSON-RPC error response 00:22:17.992 response: 00:22:17.992 { 00:22:17.992 "code": -17, 00:22:17.992 "message": "File exists" 00:22:17.992 } 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:17.992 05:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 request: 00:22:17.992 { 00:22:17.992 "name": "nvme_second", 00:22:17.992 "trtype": "tcp", 00:22:17.992 "traddr": "10.0.0.3", 00:22:17.992 "adrfam": "ipv4", 00:22:17.992 "trsvcid": "8009", 00:22:17.992 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:17.992 "wait_for_attach": true, 00:22:17.992 "method": 
"bdev_nvme_start_discovery", 00:22:17.992 "req_id": 1 00:22:17.992 } 00:22:17.992 Got JSON-RPC error response 00:22:17.992 response: 00:22:17.992 { 00:22:17.992 "code": -17, 00:22:17.992 "message": "File exists" 00:22:17.992 } 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.992 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:17.993 05:39:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.993 05:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.929 [2024-12-16 05:39:59.151298] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.929 [2024-12-16 05:39:59.151390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:22:18.929 [2024-12-16 05:39:59.151480] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:18.929 [2024-12-16 05:39:59.151496] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:18.929 [2024-12-16 05:39:59.151510] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:20.307 [2024-12-16 05:40:00.151324] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.307 [2024-12-16 05:40:00.151429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:22:20.307 [2024-12-16 05:40:00.151482] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:20.307 [2024-12-16 05:40:00.151497] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:20.307 [2024-12-16 05:40:00.151510] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:21.243 [2024-12-16 05:40:01.151102] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:22:21.243 request: 00:22:21.243 { 00:22:21.243 "name": "nvme_second", 00:22:21.243 "trtype": "tcp", 00:22:21.243 "traddr": "10.0.0.3", 00:22:21.243 "adrfam": "ipv4", 00:22:21.243 "trsvcid": "8010", 00:22:21.243 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:21.243 "wait_for_attach": false, 00:22:21.243 "attach_timeout_ms": 3000, 00:22:21.243 "method": "bdev_nvme_start_discovery", 00:22:21.243 "req_id": 1 00:22:21.243 } 00:22:21.243 Got JSON-RPC error response 00:22:21.243 response: 00:22:21.243 { 00:22:21.243 "code": -110, 00:22:21.243 "message": "Connection timed out" 00:22:21.243 } 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:21.243 
05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 83816 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.243 rmmod nvme_tcp 00:22:21.243 rmmod nvme_fabrics 00:22:21.243 rmmod nvme_keyring 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 83785 ']' 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 83785 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 83785 ']' 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 83785 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83785 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.243 killing process with pid 83785 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83785' 00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 83785 
00:22:21.243 05:40:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 83785 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.180 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:22:22.440 00:22:22.440 real 0m11.080s 00:22:22.440 user 0m20.757s 00:22:22.440 sys 0m2.081s 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:22.440 ************************************ 00:22:22.440 END TEST nvmf_host_discovery 00:22:22.440 ************************************ 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.440 ************************************ 00:22:22.440 START TEST nvmf_host_multipath_status 00:22:22.440 ************************************ 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:22.440 * Looking for test storage... 00:22:22.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.440 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.441 --rc genhtml_branch_coverage=1 00:22:22.441 --rc genhtml_function_coverage=1 00:22:22.441 --rc genhtml_legend=1 00:22:22.441 --rc geninfo_all_blocks=1 00:22:22.441 --rc geninfo_unexecuted_blocks=1 00:22:22.441 00:22:22.441 ' 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.441 --rc genhtml_branch_coverage=1 00:22:22.441 --rc genhtml_function_coverage=1 00:22:22.441 --rc genhtml_legend=1 00:22:22.441 --rc geninfo_all_blocks=1 00:22:22.441 --rc geninfo_unexecuted_blocks=1 00:22:22.441 00:22:22.441 ' 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.441 --rc genhtml_branch_coverage=1 00:22:22.441 --rc genhtml_function_coverage=1 00:22:22.441 --rc genhtml_legend=1 00:22:22.441 --rc geninfo_all_blocks=1 00:22:22.441 --rc geninfo_unexecuted_blocks=1 00:22:22.441 00:22:22.441 ' 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:22.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.441 --rc genhtml_branch_coverage=1 00:22:22.441 --rc genhtml_function_coverage=1 00:22:22.441 --rc genhtml_legend=1 00:22:22.441 --rc geninfo_all_blocks=1 00:22:22.441 --rc geninfo_unexecuted_blocks=1 00:22:22.441 00:22:22.441 ' 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.441 05:40:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.441 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:22.701 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.701 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:22.702 Cannot find device "nvmf_init_br" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:22.702 Cannot find device "nvmf_init_br2" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:22.702 Cannot find device "nvmf_tgt_br" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.702 Cannot find device "nvmf_tgt_br2" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:22.702 Cannot find device "nvmf_init_br" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:22.702 Cannot find device "nvmf_init_br2" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:22.702 Cannot find device "nvmf_tgt_br" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:22.702 Cannot find device "nvmf_tgt_br2" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:22.702 Cannot find device "nvmf_br" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:22:22.702 Cannot find device "nvmf_init_if" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:22.702 Cannot find device "nvmf_init_if2" 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:22.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:22.702 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:22.961 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:22.961 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:22.961 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:22.961 05:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:22.961 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:22.962 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:22.962 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:22:22.962 00:22:22.962 --- 10.0.0.3 ping statistics --- 00:22:22.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.962 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:22.962 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:22.962 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:22:22.962 00:22:22.962 --- 10.0.0.4 ping statistics --- 00:22:22.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.962 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:22.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:22.962 00:22:22.962 --- 10.0.0.1 ping statistics --- 00:22:22.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.962 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:22.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:22:22.962 00:22:22.962 --- 10.0.0.2 ping statistics --- 00:22:22.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.962 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=84330 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 84330 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 84330 ']' 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:22.962 05:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:23.221 [2024-12-16 05:40:03.275640] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:23.221 [2024-12-16 05:40:03.275799] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.221 [2024-12-16 05:40:03.455799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:23.480 [2024-12-16 05:40:03.545584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.480 [2024-12-16 05:40:03.545645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.480 [2024-12-16 05:40:03.545662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.480 [2024-12-16 05:40:03.545684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.480 [2024-12-16 05:40:03.545702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.480 [2024-12-16 05:40:03.547291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.480 [2024-12-16 05:40:03.547306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.480 [2024-12-16 05:40:03.695816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:24.047 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.047 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:24.047 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:24.047 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.047 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:24.047 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.047 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=84330 00:22:24.047 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:24.306 [2024-12-16 05:40:04.488227] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.306 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:24.874 Malloc0 00:22:24.874 05:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:24.874 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.132 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:25.391 [2024-12-16 05:40:05.539567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:25.391 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:25.649 [2024-12-16 05:40:05.759599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=84386 00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 84386 /var/tmp/bdevperf.sock 00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 84386 ']' 00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.649 05:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:26.586 05:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.586 05:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:26.586 05:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:26.844 05:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:27.103 Nvme0n1 00:22:27.103 05:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:27.670 Nvme0n1 00:22:27.670 05:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:27.670 05:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:29.575 05:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:29.575 05:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:29.834 05:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:30.093 05:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:31.029 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:31.029 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:31.029 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.029 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.287 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.287 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:31.287 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.287 05:40:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.546 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.546 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.546 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.546 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.805 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.805 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.805 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.805 05:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:32.083 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.083 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:32.083 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.083 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.403 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.403 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:32.403 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.403 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.662 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.662 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:32.662 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:32.922 05:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:33.181 05:40:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:34.117 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:34.117 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:34.117 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.117 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:34.375 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.375 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:34.376 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.376 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:34.634 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.634 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.634 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.634 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:34.893 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.893 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:34.893 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:34.893 05:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.152 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.152 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:35.152 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.152 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:35.410 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.410 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:35.410 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:35.410 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.668 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.668 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:35.669 05:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:35.927 05:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:22:36.186 05:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:37.121 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:37.122 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:37.122 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.122 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:37.380 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.380 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:37.380 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.380 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:37.639 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.639 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:37.639 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.639 05:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:37.898 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.898 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:22:37.898 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.898 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:38.157 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.157 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:38.157 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.157 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.415 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.415 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:38.415 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.415 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.672 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.672 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:38.672 05:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:38.931 05:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:39.190 05:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:40.125 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:40.125 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:40.125 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.125 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:40.384 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.384 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:40.384 05:40:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.384 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:40.642 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:40.642 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:40.642 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.642 05:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:40.900 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.900 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:40.900 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.900 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:41.159 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.159 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:41.159 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.159 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:41.418 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.418 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:41.418 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.418 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:41.676 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:41.676 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:41.676 05:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:41.934 05:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:42.192 05:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:43.127 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:43.127 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:43.127 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.127 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:43.385 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:43.385 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:43.385 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.385 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:43.644 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:43.644 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:43.644 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.644 05:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:44.212 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.212 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:44.212 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.212 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:44.471 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.471 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:44.471 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.471 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:22:44.471 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:44.471 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:44.471 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.471 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:44.730 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:44.730 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:44.730 05:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:44.989 05:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:45.248 05:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:46.624 05:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:46.624 05:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:46.624 05:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.624 05:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:46.624 05:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:46.624 05:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:46.624 05:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.624 05:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:46.883 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.883 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:46.883 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.883 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:22:47.142 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.142 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:47.142 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:47.142 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.402 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.402 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:47.402 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.402 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:47.661 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:47.661 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:47.661 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:47.661 05:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.919 05:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.919 05:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:48.178 05:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:48.178 05:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:48.437 05:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:48.695 05:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:49.631 05:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:49.631 05:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:49.631 05:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
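Each set_ANA_state call seen in this trace issues two nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener port, and the bdev_nvme_set_multipath_policy call above switches Nvme0n1 to active_active so that every optimized path is expected to report current=true on the next check. A hedged sketch of the helper as it appears in the xtrace (multipath_status.sh@59-@60); the positional argument names are assumptions:

    # Sketch reconstructed from the trace; argument names are assumed.
    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$state_4420"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$state_4421"
    }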
00:22:49.631 05:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:49.890 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.890 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:49.890 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.890 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:50.458 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.458 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:50.458 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.458 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:50.458 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.458 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:50.458 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.458 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:51.025 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.025 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:51.025 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:51.025 05:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.025 05:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.025 05:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:51.025 05:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.025 05:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:51.593 05:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.593 
05:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:51.593 05:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:51.852 05:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:51.852 05:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:53.229 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:53.229 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:53.229 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.229 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:53.229 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.229 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:53.229 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.229 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:53.489 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.489 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:53.489 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:53.489 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.748 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.748 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:53.748 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.748 05:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:54.023 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.023 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:54.023 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:54.023 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.294 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.294 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:54.294 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:54.294 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.552 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.552 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:54.553 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:54.811 05:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:22:55.070 05:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:56.014 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:56.014 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:56.014 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.014 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:56.273 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.273 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:56.273 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.273 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:56.532 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.532 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:22:56.532 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.532 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:56.791 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.791 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:56.791 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.791 05:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:57.050 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.051 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:57.051 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:57.051 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.309 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.309 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:57.309 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.309 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:57.568 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.568 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:57.568 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:57.827 05:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:58.086 05:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:59.023 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:59.023 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:59.023 05:40:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.023 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:59.282 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.282 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:59.282 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.282 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:59.541 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:59.541 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:59.541 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:59.541 05:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.109 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.109 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:00.109 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:00.109 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.368 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.368 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:00.368 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.368 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 84386 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 84386 ']' 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 84386 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.627 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84386 00:23:00.886 killing process with pid 84386 00:23:00.886 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:00.886 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:00.886 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84386' 00:23:00.886 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 84386 00:23:00.886 05:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 84386 00:23:00.886 { 00:23:00.886 "results": [ 00:23:00.886 { 00:23:00.886 "job": "Nvme0n1", 00:23:00.886 "core_mask": "0x4", 00:23:00.886 "workload": "verify", 00:23:00.886 "status": "terminated", 00:23:00.886 "verify_range": { 00:23:00.886 "start": 0, 00:23:00.886 "length": 16384 00:23:00.887 }, 00:23:00.887 "queue_depth": 128, 00:23:00.887 "io_size": 4096, 00:23:00.887 "runtime": 33.127354, 00:23:00.887 "iops": 7748.792734849877, 00:23:00.887 "mibps": 30.268721620507332, 00:23:00.887 "io_failed": 0, 00:23:00.887 "io_timeout": 0, 00:23:00.887 "avg_latency_us": 16486.541766745155, 00:23:00.887 "min_latency_us": 826.6472727272727, 00:23:00.887 "max_latency_us": 4026531.84 00:23:00.887 } 00:23:00.887 ], 00:23:00.887 "core_count": 1 00:23:00.887 } 00:23:01.827 05:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 84386 00:23:01.827 05:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:01.827 [2024-12-16 05:40:05.852178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:01.827 [2024-12-16 05:40:05.852363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84386 ] 00:23:01.827 [2024-12-16 05:40:06.018417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.827 [2024-12-16 05:40:06.104236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.827 [2024-12-16 05:40:06.254862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:01.827 Running I/O for 90 seconds... 
00:23:01.827 6684.00 IOPS, 26.11 MiB/s [2024-12-16T05:40:42.086Z] 6670.00 IOPS, 26.05 MiB/s [2024-12-16T05:40:42.086Z] 6622.67 IOPS, 25.87 MiB/s [2024-12-16T05:40:42.086Z] 6567.25 IOPS, 25.65 MiB/s [2024-12-16T05:40:42.086Z] 6533.80 IOPS, 25.52 MiB/s [2024-12-16T05:40:42.086Z] 6808.83 IOPS, 26.60 MiB/s [2024-12-16T05:40:42.086Z] 7051.57 IOPS, 27.55 MiB/s [2024-12-16T05:40:42.086Z] 7220.88 IOPS, 28.21 MiB/s [2024-12-16T05:40:42.086Z] 7368.00 IOPS, 28.78 MiB/s [2024-12-16T05:40:42.086Z] 7492.80 IOPS, 29.27 MiB/s [2024-12-16T05:40:42.086Z] 7584.00 IOPS, 29.62 MiB/s [2024-12-16T05:40:42.086Z] 7670.00 IOPS, 29.96 MiB/s [2024-12-16T05:40:42.086Z] 7739.69 IOPS, 30.23 MiB/s [2024-12-16T05:40:42.086Z] 7788.57 IOPS, 30.42 MiB/s [2024-12-16T05:40:42.086Z] [2024-12-16 05:40:22.070114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.827 [2024-12-16 05:40:22.070200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.827 [2024-12-16 05:40:22.070286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.827 [2024-12-16 05:40:22.070318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.827 [2024-12-16 05:40:22.070348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.827 [2024-12-16 05:40:22.070369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:01.827 [2024-12-16 05:40:22.070396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.827 [2024-12-16 05:40:22.070416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:01.827 [2024-12-16 05:40:22.070443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.827 [2024-12-16 05:40:22.070462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.070519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.070564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.070626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.070677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.070745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.070791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.070837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.070901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.070948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.070975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.071012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.071069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.071118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 
05:40:22.071182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.071229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.071276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.071324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.071414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.071460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.828 [2024-12-16 05:40:22.071507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.071576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.071641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.071701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104856 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.071767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.071816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.071865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.071914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.071961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.072035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.072087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.072151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.072206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.072301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.072362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.072407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.072453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.828 [2024-12-16 05:40:22.072498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:01.828 [2024-12-16 05:40:22.072525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.072544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.072570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.072590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.072615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.072651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.072677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.072726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.072762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.072783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.072811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.072841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.072870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.072891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 
m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.072919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.072940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.072968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.072989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073922] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.829 [2024-12-16 05:40:22.073968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.073994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104576 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.829 [2024-12-16 05:40:22.074499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:01.829 [2024-12-16 05:40:22.074529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.074549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.074594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.074613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.074669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.074694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.074731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.074753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.074780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.074801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.074829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.074849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.074879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.074899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.074926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.074946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.074974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.074994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.075041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.075103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.075148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.075194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.075241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.075286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.075358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.075411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:23:01.830 [2024-12-16 05:40:22.075438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.075913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.075966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.076017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.076050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.076073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.076104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.076127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.076157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.076179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.076211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.076239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.077179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.830 [2024-12-16 05:40:22.077213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.077256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.077278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.077312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.077333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.077366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.077387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.077421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 
05:40:22.077441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.077474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.077494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.077528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.830 [2024-12-16 05:40:22.077548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:01.830 [2024-12-16 05:40:22.077582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.077614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:22.077683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.077727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:22.077763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.077785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:22.077819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.077840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:22.077874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.077895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:22.077929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.077949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:22.077983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.078003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:22.078037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105320 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.078058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:22.078092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.078113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:22.078165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:22.078186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:01.831 7422.93 IOPS, 29.00 MiB/s [2024-12-16T05:40:42.090Z] 6959.00 IOPS, 27.18 MiB/s [2024-12-16T05:40:42.090Z] 6549.65 IOPS, 25.58 MiB/s [2024-12-16T05:40:42.090Z] 6185.78 IOPS, 24.16 MiB/s [2024-12-16T05:40:42.090Z] 6180.58 IOPS, 24.14 MiB/s [2024-12-16T05:40:42.090Z] 6284.55 IOPS, 24.55 MiB/s [2024-12-16T05:40:42.090Z] 6421.95 IOPS, 25.09 MiB/s [2024-12-16T05:40:42.090Z] 6681.73 IOPS, 26.10 MiB/s [2024-12-16T05:40:42.090Z] 6899.91 IOPS, 26.95 MiB/s [2024-12-16T05:40:42.090Z] 7098.42 IOPS, 27.73 MiB/s [2024-12-16T05:40:42.090Z] 7158.16 IOPS, 27.96 MiB/s [2024-12-16T05:40:42.090Z] 7202.85 IOPS, 28.14 MiB/s [2024-12-16T05:40:42.090Z] 7236.52 IOPS, 28.27 MiB/s [2024-12-16T05:40:42.090Z] 7360.32 IOPS, 28.75 MiB/s [2024-12-16T05:40:42.090Z] 7498.48 IOPS, 29.29 MiB/s [2024-12-16T05:40:42.090Z] 7631.30 IOPS, 29.81 MiB/s [2024-12-16T05:40:42.090Z] [2024-12-16 05:40:38.240791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.240867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.240969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.241157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.241204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.241250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.241342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.241478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.241524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.241616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.241734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.241782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.241953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.241972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.242014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.242035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.242062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.242081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.242107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.242127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.242153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:01.831 [2024-12-16 05:40:38.242173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.242199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.831 [2024-12-16 05:40:38.242227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:01.831 [2024-12-16 05:40:38.242256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.831 [2024-12-16 05:40:38.242276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.242322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.242368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.242414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.242460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.242506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.242552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.242599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:64 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.242664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.242710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.242755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.242802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.242858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.242904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.242950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.242976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.242995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.243041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.243086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.243132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.243177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.243465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:23:01.832 [2024-12-16 05:40:38.243584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.243665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.832 [2024-12-16 05:40:38.243756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.832 [2024-12-16 05:40:38.243802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:01.832 [2024-12-16 05:40:38.243828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.243847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.243873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.243893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.243919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.243938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.243996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.244027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.244056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.244077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.244105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.244125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.245699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.245738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.245775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.245798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.245827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.245848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.245876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.245897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.245925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.245946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.245973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.246009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.246055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.246102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.246149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.246201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.833 [2024-12-16 05:40:38.246255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.246303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.246365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.246415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.246485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:01.833 [2024-12-16 05:40:38.246513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.833 [2024-12-16 05:40:38.246533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.833 7706.61 IOPS, 30.10 MiB/s [2024-12-16T05:40:42.092Z] 7732.28 IOPS, 30.20 MiB/s [2024-12-16T05:40:42.092Z] 7748.39 IOPS, 30.27 MiB/s [2024-12-16T05:40:42.092Z] Received shutdown signal, test time was about 33.128184 seconds 00:23:01.833 00:23:01.833 Latency(us) 00:23:01.833 [2024-12-16T05:40:42.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.833 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:01.833 Verification LBA range: start 0x0 length 0x4000 00:23:01.833 Nvme0n1 : 33.13 7748.79 30.27 0.00 0.00 16486.54 826.65 4026531.84 00:23:01.833 [2024-12-16T05:40:42.092Z] =================================================================================================================== 00:23:01.833 [2024-12-16T05:40:42.092Z] Total : 7748.79 30.27 0.00 0.00 16486.54 826.65 4026531.84 00:23:01.833 05:40:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:01.833 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:01.833 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:01.833 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:01.833 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.833 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.093 rmmod nvme_tcp 00:23:02.093 rmmod nvme_fabrics 00:23:02.093 rmmod nvme_keyring 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 84330 ']' 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 84330 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 84330 ']' 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 84330 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84330 00:23:02.093 killing process with pid 84330 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84330' 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 84330 00:23:02.093 05:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 84330 00:23:03.030 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:03.030 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:03.031 05:40:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:03.031 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:23:03.290 00:23:03.290 real 0m40.952s 00:23:03.290 user 2m10.808s 00:23:03.290 sys 0m10.484s 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:03.290 ************************************ 00:23:03.290 END TEST nvmf_host_multipath_status 00:23:03.290 ************************************ 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test 
nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.290 ************************************ 00:23:03.290 START TEST nvmf_discovery_remove_ifc 00:23:03.290 ************************************ 00:23:03.290 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:03.550 * Looking for test storage... 00:23:03.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:03.550 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:03.550 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:03.550 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:03.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.551 --rc genhtml_branch_coverage=1 00:23:03.551 --rc genhtml_function_coverage=1 00:23:03.551 --rc genhtml_legend=1 00:23:03.551 --rc geninfo_all_blocks=1 00:23:03.551 --rc geninfo_unexecuted_blocks=1 00:23:03.551 00:23:03.551 ' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:03.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.551 --rc genhtml_branch_coverage=1 00:23:03.551 --rc genhtml_function_coverage=1 00:23:03.551 --rc genhtml_legend=1 00:23:03.551 --rc geninfo_all_blocks=1 00:23:03.551 --rc geninfo_unexecuted_blocks=1 00:23:03.551 00:23:03.551 ' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:03.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.551 --rc genhtml_branch_coverage=1 00:23:03.551 --rc genhtml_function_coverage=1 00:23:03.551 --rc genhtml_legend=1 00:23:03.551 --rc geninfo_all_blocks=1 00:23:03.551 --rc geninfo_unexecuted_blocks=1 00:23:03.551 00:23:03.551 ' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:03.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.551 --rc genhtml_branch_coverage=1 00:23:03.551 --rc genhtml_function_coverage=1 00:23:03.551 --rc genhtml_legend=1 00:23:03.551 --rc geninfo_all_blocks=1 00:23:03.551 --rc geninfo_unexecuted_blocks=1 00:23:03.551 00:23:03.551 ' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:03.551 05:40:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:03.551 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:03.551 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:03.552 05:40:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:03.552 Cannot find device "nvmf_init_br" 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:03.552 Cannot find device "nvmf_init_br2" 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:03.552 Cannot find device "nvmf_tgt_br" 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:03.552 Cannot find device "nvmf_tgt_br2" 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:03.552 Cannot find device "nvmf_init_br" 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:03.552 Cannot find device "nvmf_init_br2" 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:23:03.552 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:03.811 Cannot find device "nvmf_tgt_br" 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:03.811 Cannot find device "nvmf_tgt_br2" 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:03.811 Cannot find device "nvmf_br" 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:03.811 Cannot find device "nvmf_init_if" 00:23:03.811 05:40:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:03.811 Cannot find device "nvmf_init_if2" 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:03.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:03.811 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:03.811 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:03.812 05:40:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:03.812 05:40:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:03.812 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:03.812 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:03.812 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:03.812 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:03.812 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:03.812 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:04.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:04.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:23:04.071 00:23:04.071 --- 10.0.0.3 ping statistics --- 00:23:04.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.071 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:04.071 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:04.071 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:23:04.071 00:23:04.071 --- 10.0.0.4 ping statistics --- 00:23:04.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.071 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:04.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:04.071 00:23:04.071 --- 10.0.0.1 ping statistics --- 00:23:04.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.071 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:04.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:23:04.071 00:23:04.071 --- 10.0.0.2 ping statistics --- 00:23:04.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.071 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:04.071 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=85224 00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 85224 00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 85224 ']' 00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
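The nvmf_veth_init and nvmfappstart steps traced above build the test topology: two veth pairs per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything bridged on nvmf_br, TCP port 4420 opened in iptables, and connectivity verified with the four pings. A minimal standalone sketch of that setup, condensed to a single initiator/target pair and using only interface names, addresses, and commands that appear in the trace (run as root):

    # one veth pair for the initiator, one for the target; move the target end
    # into its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator on 10.0.0.1, target on 10.0.0.3 (the .2/.4 pair is built the same way)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # let NVMe/TCP traffic reach port 4420 and confirm the path works
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3

The target application is then launched inside the namespace exactly as traced (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), and the test waits for its RPC socket at /var/tmp/spdk.sock before the subsystem listeners on 10.0.0.3:8009 and 10.0.0.3:4420 are configured.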
00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.072 05:40:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.072 [2024-12-16 05:40:44.260476] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:04.072 [2024-12-16 05:40:44.260654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.331 [2024-12-16 05:40:44.451355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.331 [2024-12-16 05:40:44.574873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.331 [2024-12-16 05:40:44.574946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.331 [2024-12-16 05:40:44.574970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.331 [2024-12-16 05:40:44.575002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.331 [2024-12-16 05:40:44.575019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.331 [2024-12-16 05:40:44.576457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.590 [2024-12-16 05:40:44.775090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.157 [2024-12-16 05:40:45.250312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.157 [2024-12-16 05:40:45.258469] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:05.157 null0 00:23:05.157 [2024-12-16 05:40:45.290392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=85256 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 85256 /tmp/host.sock 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 85256 ']' 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:05.157 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.158 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:05.158 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:05.158 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.158 05:40:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.158 [2024-12-16 05:40:45.400487] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:05.158 [2024-12-16 05:40:45.400667] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85256 ] 00:23:05.416 [2024-12-16 05:40:45.576318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.676 [2024-12-16 05:40:45.699839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.244 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.244 [2024-12-16 05:40:46.491363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:06.503 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.503 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:06.503 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.503 05:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.441 [2024-12-16 05:40:47.604664] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:07.441 [2024-12-16 05:40:47.604715] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:07.441 [2024-12-16 05:40:47.604751] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:07.441 [2024-12-16 05:40:47.610736] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:23:07.441 [2024-12-16 05:40:47.673306] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:23:07.441 [2024-12-16 05:40:47.674613] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:23:07.441 [2024-12-16 05:40:47.676704] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:07.441 [2024-12-16 05:40:47.676796] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:07.441 [2024-12-16 05:40:47.676856] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:07.441 [2024-12-16 05:40:47.676881] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:07.441 [2024-12-16 05:40:47.676914] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:07.441 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.441 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:07.441 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.441 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.441 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.441 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.441 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.441 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.441 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.441 [2024-12-16 05:40:47.683951] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
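On the host side the trace starts a second SPDK application (/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme) and drives it through the rpc_cmd wrapper. A condensed sketch of that flow, assuming SPDK's scripts/rpc.py as the RPC client (the wrapper passes identical arguments) and jq available on the host:

    # host-side app on its own RPC socket, with bdev_nvme debug logging
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

    # bdev_nvme options exactly as passed in the trace, then start the framework
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init

    # attach via the discovery service on 10.0.0.3:8009; the short loss/reconnect
    # timeouts are what make the interface removal visible within a few seconds
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # get_bdev_list as used throughout the rest of the log: names only, sorted, one line
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

After --wait-for-attach returns, the bdev list prints nvme0n1, which is the state the first wait_for_bdev nvme0n1 check above confirms.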
00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:07.700 05:40:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:08.637 05:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:10.014 05:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:10.014 05:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.014 05:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.014 05:40:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:10.014 05:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.014 05:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:10.015 05:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:10.015 05:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.015 05:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:10.015 05:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:10.952 05:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:11.888 05:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:11.888 05:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.888 05:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:11.888 05:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.888 05:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:11.888 05:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:11.888 05:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:11.888 05:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.888 05:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:11.888 05:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:12.825 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:12.825 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.825 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:12.825 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.825 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:12.825 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:12.825 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:12.825 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.084 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:13.084 05:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:13.084 [2024-12-16 05:40:53.104835] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:13.084 [2024-12-16 05:40:53.104944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.084 [2024-12-16 05:40:53.104981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.084 [2024-12-16 05:40:53.105013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.084 [2024-12-16 05:40:53.105024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.084 [2024-12-16 05:40:53.105036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.084 [2024-12-16 05:40:53.105046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.084 [2024-12-16 05:40:53.105058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.084 [2024-12-16 05:40:53.105069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.084 [2024-12-16 05:40:53.105081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.084 [2024-12-16 05:40:53.105107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.084 [2024-12-16 05:40:53.105134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:13.084 [2024-12-16 05:40:53.114826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:13.084 [2024-12-16 05:40:53.124847] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:13.084 [2024-12-16 05:40:53.124898] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
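The long run of sleep 1 / bdev_get_bdevs cycles between the interface removal (ip addr del 10.0.0.3/24 and ip link set nvmf_tgt_if down at @75/@76) and the errno 110 keep-alive failure above is the test's wait_for_bdev helper polling for an empty bdev list. A simplified sketch of that loop, using the same socket and commands; the real helper in discovery_remove_ifc.sh presumably also bounds the number of retries:

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected value
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    # take the target-side path away, then wait for nvme0n1 to disappear
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''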
00:23:13.084 [2024-12-16 05:40:53.124909] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:13.084 [2024-12-16 05:40:53.124917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:13.084 [2024-12-16 05:40:53.124998] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:14.023 [2024-12-16 05:40:54.178732] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:23:14.023 [2024-12-16 05:40:54.178898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:23:14.023 [2024-12-16 05:40:54.178947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:14.023 [2024-12-16 05:40:54.179066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:14.023 [2024-12-16 05:40:54.180434] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:14.023 [2024-12-16 05:40:54.180579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:14.023 [2024-12-16 05:40:54.180657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:14.023 [2024-12-16 05:40:54.180692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:14.023 [2024-12-16 05:40:54.180722] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:14.023 [2024-12-16 05:40:54.180745] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:14.023 [2024-12-16 05:40:54.180764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:14.023 [2024-12-16 05:40:54.180799] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
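The connect() failed, errno = 110 entry above is a reconnect attempt against the downed nvmf_tgt_if: with --reconnect-delay-sec 1 the controller retries roughly once per second, and --ctrlr-loss-timeout-sec 2 bounds how long bdev_nvme keeps trying after the keep-alive failure before the controller is declared lost, its bdev deleted, and (as the next lines show) the discovery entry removed. A small watch loop to observe that window from the host shell, reusing only the get_bdev_list sketch above plus date:

    # timestamped bdev list once per second; nvme0n1 disappears once the
    # ctrlr-loss timeout expires after the keep-alive/reconnect failures
    for _ in 1 2 3 4 5; do
        echo "$(date +%T) bdevs: [$(get_bdev_list)]"
        sleep 1
    done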
00:23:14.023 [2024-12-16 05:40:54.180827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:14.023 05:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:14.964 [2024-12-16 05:40:55.180944] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:14.964 [2024-12-16 05:40:55.181006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:14.964 [2024-12-16 05:40:55.181035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:14.964 [2024-12-16 05:40:55.181064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:14.964 [2024-12-16 05:40:55.181077] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:14.964 [2024-12-16 05:40:55.181089] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:14.964 [2024-12-16 05:40:55.181098] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:14.964 [2024-12-16 05:40:55.181106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:14.964 [2024-12-16 05:40:55.181157] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:23:14.964 [2024-12-16 05:40:55.181208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.964 [2024-12-16 05:40:55.181228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.964 [2024-12-16 05:40:55.181266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.964 [2024-12-16 05:40:55.181294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.964 [2024-12-16 05:40:55.181307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.964 [2024-12-16 05:40:55.181319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.964 [2024-12-16 05:40:55.181331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.964 [2024-12-16 05:40:55.181343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.964 [2024-12-16 05:40:55.181355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.964 [2024-12-16 05:40:55.181381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.964 [2024-12-16 05:40:55.181394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:14.964 [2024-12-16 05:40:55.181816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:23:14.964 [2024-12-16 05:40:55.182849] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:14.964 [2024-12-16 05:40:55.182875] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:14.964 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:14.964 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.964 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:14.964 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.964 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:14.964 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.964 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:15.223 05:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.158 05:40:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.158 05:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.158 05:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.158 05:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.158 05:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.158 05:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.158 05:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.159 05:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.159 05:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:16.159 05:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:17.094 [2024-12-16 05:40:57.189848] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:17.094 [2024-12-16 05:40:57.189887] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:17.094 [2024-12-16 05:40:57.189933] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:17.095 [2024-12-16 05:40:57.195931] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:23:17.095 [2024-12-16 05:40:57.258553] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:23:17.095 [2024-12-16 05:40:57.259816] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:23:17.095 [2024-12-16 05:40:57.261827] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:17.095 [2024-12-16 05:40:57.261905] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:17.095 [2024-12-16 05:40:57.261957] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:17.095 [2024-12-16 05:40:57.261981] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:23:17.095 [2024-12-16 05:40:57.261996] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:17.095 [2024-12-16 05:40:57.268538] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
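
For reference, the wait loop traced in the entries above reduces to roughly the sketch below. The helper names (get_bdev_list, wait_for_bdev) and the jq/sort/xargs pipeline come straight from the xtrace of host/discovery_remove_ifc.sh; calling scripts/rpc.py directly is a simplification of the rpc_cmd wrapper the test actually goes through.

    get_bdev_list() {
        # list bdev names known to the host app listening on /tmp/host.sock
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local bdev=$1
        # re-poll once a second until the discovery service re-attaches the namespace
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme1n1

Once the target IP is restored on nvmf_tgt_if, the discovery poller re-creates the controller and the loop above sees nvme1n1 appear, which is what the subsequent [[ nvme1n1 != nvme1n1 ]] check confirms.
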
00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 85256 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 85256 ']' 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 85256 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85256 00:23:17.354 killing process with pid 85256 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85256' 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 85256 00:23:17.354 05:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 85256 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.292 rmmod nvme_tcp 00:23:18.292 rmmod nvme_fabrics 00:23:18.292 rmmod nvme_keyring 00:23:18.292 05:40:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 85224 ']' 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 85224 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 85224 ']' 00:23:18.292 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 85224 00:23:18.293 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:18.293 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.293 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85224 00:23:18.293 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:18.293 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.293 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85224' 00:23:18.293 killing process with pid 85224 00:23:18.293 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 85224 00:23:18.293 05:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 85224 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.230 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:23:19.490 ************************************ 00:23:19.490 END TEST nvmf_discovery_remove_ifc 00:23:19.490 ************************************ 00:23:19.490 00:23:19.490 real 0m16.005s 00:23:19.490 user 0m26.948s 00:23:19.490 sys 0m2.557s 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.490 ************************************ 00:23:19.490 START TEST nvmf_identify_kernel_target 00:23:19.490 ************************************ 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:19.490 * Looking for test storage... 
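
Between the two tests the harness tears the virtual test network down again. The nvmftestfini cleanup traced above condenses to roughly the following sketch (helper bodies approximated from test/nvmf/common.sh as shown in the trace; "ip netns delete" stands in for the remove_spdk_ns helper, and error handling is omitted):

    # unload the kernel NVMe-oF initiator modules pulled in for the test
    modprobe -v -r nvme-tcp nvme-fabrics
    # drop only the SPDK-tagged firewall rules by replaying the ruleset without them
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # detach and delete the veth/bridge fabric
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster
        ip link set "$ifc" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk

The "Cannot find device" and "Cannot open network namespace" messages seen when the next test re-runs this teardown are expected: the helpers are idempotent and simply find nothing left to remove before rebuilding the topology.
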
00:23:19.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.490 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:19.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.750 --rc genhtml_branch_coverage=1 00:23:19.750 --rc genhtml_function_coverage=1 00:23:19.750 --rc genhtml_legend=1 00:23:19.750 --rc geninfo_all_blocks=1 00:23:19.750 --rc geninfo_unexecuted_blocks=1 00:23:19.750 00:23:19.750 ' 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:19.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.750 --rc genhtml_branch_coverage=1 00:23:19.750 --rc genhtml_function_coverage=1 00:23:19.750 --rc genhtml_legend=1 00:23:19.750 --rc geninfo_all_blocks=1 00:23:19.750 --rc geninfo_unexecuted_blocks=1 00:23:19.750 00:23:19.750 ' 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:19.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.750 --rc genhtml_branch_coverage=1 00:23:19.750 --rc genhtml_function_coverage=1 00:23:19.750 --rc genhtml_legend=1 00:23:19.750 --rc geninfo_all_blocks=1 00:23:19.750 --rc geninfo_unexecuted_blocks=1 00:23:19.750 00:23:19.750 ' 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:19.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.750 --rc genhtml_branch_coverage=1 00:23:19.750 --rc genhtml_function_coverage=1 00:23:19.750 --rc genhtml_legend=1 00:23:19.750 --rc geninfo_all_blocks=1 00:23:19.750 --rc geninfo_unexecuted_blocks=1 00:23:19.750 00:23:19.750 ' 00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
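
The lcov probe traced just above runs the version gate from scripts/common.sh. As a standalone sketch it is approximately the following (function names follow the trace; the bodies are an approximation, not the verbatim helpers):

    decimal() {
        # keep only purely numeric version components, treat anything else as 0
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    cmp_versions() {
        local IFS=.-:          # split version strings on '.', '-' and ':'
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if (( $(decimal "${ver1[v]:-0}") > $(decimal "${ver2[v]:-0}") )); then
                [[ $op == ">" || $op == ">=" ]]; return    # status of the test is the result
            elif (( $(decimal "${ver1[v]:-0}") < $(decimal "${ver2[v]:-0}") )); then
                [[ $op == "<" || $op == "<=" ]]; return
            fi
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]
    }

    lt() { cmp_versions "$1" "<" "$2"; }

Here lt 1.15 2 succeeds (the installed lcov 1.15 is older than 2.x), which is why the run exports the legacy --rc lcov_branch_coverage/lcov_function_coverage options seen in the following entries.
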
00:23:19.750 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.751 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:19.751 05:40:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:19.751 05:40:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:19.751 Cannot find device "nvmf_init_br" 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:19.751 Cannot find device "nvmf_init_br2" 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:19.751 Cannot find device "nvmf_tgt_br" 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:19.751 Cannot find device "nvmf_tgt_br2" 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:19.751 Cannot find device "nvmf_init_br" 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:19.751 Cannot find device "nvmf_init_br2" 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:19.751 Cannot find device "nvmf_tgt_br" 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:23:19.751 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:19.751 Cannot find device "nvmf_tgt_br2" 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:19.752 Cannot find device "nvmf_br" 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:19.752 Cannot find device "nvmf_init_if" 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:19.752 Cannot find device "nvmf_init_if2" 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:19.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.752 05:40:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:19.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:19.752 05:40:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:19.752 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:20.011 05:41:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:20.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:20.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:23:20.011 00:23:20.011 --- 10.0.0.3 ping statistics --- 00:23:20.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.011 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:20.011 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:20.011 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:23:20.011 00:23:20.011 --- 10.0.0.4 ping statistics --- 00:23:20.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.011 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:20.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:20.011 00:23:20.011 --- 10.0.0.1 ping statistics --- 00:23:20.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.011 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:20.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:20.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:23:20.011 00:23:20.011 --- 10.0.0.2 ping statistics --- 00:23:20.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.011 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.011 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:20.012 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:20.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:20.580 Waiting for block devices as requested 00:23:20.580 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:20.580 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:20.580 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:20.580 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:20.580 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:20.580 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:20.580 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:20.580 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:20.580 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:20.580 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:20.580 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:20.839 No valid GPT data, bailing 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:20.839 05:41:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:20.839 No valid GPT data, bailing 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:20.839 05:41:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:20.839 No valid GPT data, bailing 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:20.839 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:21.098 No valid GPT data, bailing 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -a 10.0.0.1 -t tcp -s 4420 00:23:21.098 00:23:21.098 Discovery Log Number of Records 2, Generation counter 2 00:23:21.098 =====Discovery Log Entry 0====== 00:23:21.098 trtype: tcp 00:23:21.098 adrfam: ipv4 00:23:21.098 subtype: current discovery subsystem 00:23:21.098 treq: not specified, sq flow control disable supported 00:23:21.098 portid: 1 00:23:21.098 trsvcid: 4420 00:23:21.098 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:21.098 traddr: 10.0.0.1 00:23:21.098 eflags: none 00:23:21.098 sectype: none 00:23:21.098 =====Discovery Log Entry 1====== 00:23:21.098 trtype: tcp 00:23:21.098 adrfam: ipv4 00:23:21.098 subtype: nvme subsystem 00:23:21.098 treq: not 
specified, sq flow control disable supported 00:23:21.098 portid: 1 00:23:21.098 trsvcid: 4420 00:23:21.098 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:21.098 traddr: 10.0.0.1 00:23:21.098 eflags: none 00:23:21.098 sectype: none 00:23:21.098 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:21.098 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:21.357 ===================================================== 00:23:21.357 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:21.357 ===================================================== 00:23:21.357 Controller Capabilities/Features 00:23:21.357 ================================ 00:23:21.357 Vendor ID: 0000 00:23:21.357 Subsystem Vendor ID: 0000 00:23:21.357 Serial Number: 81e923109fb10d89984b 00:23:21.357 Model Number: Linux 00:23:21.357 Firmware Version: 6.8.9-20 00:23:21.357 Recommended Arb Burst: 0 00:23:21.357 IEEE OUI Identifier: 00 00 00 00:23:21.357 Multi-path I/O 00:23:21.357 May have multiple subsystem ports: No 00:23:21.357 May have multiple controllers: No 00:23:21.357 Associated with SR-IOV VF: No 00:23:21.357 Max Data Transfer Size: Unlimited 00:23:21.357 Max Number of Namespaces: 0 00:23:21.357 Max Number of I/O Queues: 1024 00:23:21.357 NVMe Specification Version (VS): 1.3 00:23:21.357 NVMe Specification Version (Identify): 1.3 00:23:21.357 Maximum Queue Entries: 1024 00:23:21.357 Contiguous Queues Required: No 00:23:21.357 Arbitration Mechanisms Supported 00:23:21.357 Weighted Round Robin: Not Supported 00:23:21.357 Vendor Specific: Not Supported 00:23:21.357 Reset Timeout: 7500 ms 00:23:21.357 Doorbell Stride: 4 bytes 00:23:21.357 NVM Subsystem Reset: Not Supported 00:23:21.357 Command Sets Supported 00:23:21.357 NVM Command Set: Supported 00:23:21.357 Boot Partition: Not Supported 00:23:21.357 Memory Page Size Minimum: 4096 bytes 00:23:21.357 Memory Page Size Maximum: 4096 bytes 00:23:21.357 Persistent Memory Region: Not Supported 00:23:21.357 Optional Asynchronous Events Supported 00:23:21.357 Namespace Attribute Notices: Not Supported 00:23:21.357 Firmware Activation Notices: Not Supported 00:23:21.357 ANA Change Notices: Not Supported 00:23:21.357 PLE Aggregate Log Change Notices: Not Supported 00:23:21.357 LBA Status Info Alert Notices: Not Supported 00:23:21.357 EGE Aggregate Log Change Notices: Not Supported 00:23:21.357 Normal NVM Subsystem Shutdown event: Not Supported 00:23:21.357 Zone Descriptor Change Notices: Not Supported 00:23:21.357 Discovery Log Change Notices: Supported 00:23:21.357 Controller Attributes 00:23:21.357 128-bit Host Identifier: Not Supported 00:23:21.357 Non-Operational Permissive Mode: Not Supported 00:23:21.357 NVM Sets: Not Supported 00:23:21.357 Read Recovery Levels: Not Supported 00:23:21.357 Endurance Groups: Not Supported 00:23:21.357 Predictable Latency Mode: Not Supported 00:23:21.357 Traffic Based Keep ALive: Not Supported 00:23:21.357 Namespace Granularity: Not Supported 00:23:21.357 SQ Associations: Not Supported 00:23:21.357 UUID List: Not Supported 00:23:21.357 Multi-Domain Subsystem: Not Supported 00:23:21.357 Fixed Capacity Management: Not Supported 00:23:21.357 Variable Capacity Management: Not Supported 00:23:21.357 Delete Endurance Group: Not Supported 00:23:21.357 Delete NVM Set: Not Supported 00:23:21.357 Extended LBA Formats Supported: Not Supported 00:23:21.357 Flexible Data 
Placement Supported: Not Supported 00:23:21.357 00:23:21.357 Controller Memory Buffer Support 00:23:21.357 ================================ 00:23:21.357 Supported: No 00:23:21.357 00:23:21.357 Persistent Memory Region Support 00:23:21.357 ================================ 00:23:21.357 Supported: No 00:23:21.357 00:23:21.357 Admin Command Set Attributes 00:23:21.357 ============================ 00:23:21.357 Security Send/Receive: Not Supported 00:23:21.357 Format NVM: Not Supported 00:23:21.357 Firmware Activate/Download: Not Supported 00:23:21.357 Namespace Management: Not Supported 00:23:21.357 Device Self-Test: Not Supported 00:23:21.357 Directives: Not Supported 00:23:21.357 NVMe-MI: Not Supported 00:23:21.357 Virtualization Management: Not Supported 00:23:21.357 Doorbell Buffer Config: Not Supported 00:23:21.357 Get LBA Status Capability: Not Supported 00:23:21.357 Command & Feature Lockdown Capability: Not Supported 00:23:21.357 Abort Command Limit: 1 00:23:21.357 Async Event Request Limit: 1 00:23:21.357 Number of Firmware Slots: N/A 00:23:21.357 Firmware Slot 1 Read-Only: N/A 00:23:21.357 Firmware Activation Without Reset: N/A 00:23:21.357 Multiple Update Detection Support: N/A 00:23:21.357 Firmware Update Granularity: No Information Provided 00:23:21.357 Per-Namespace SMART Log: No 00:23:21.357 Asymmetric Namespace Access Log Page: Not Supported 00:23:21.357 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:21.357 Command Effects Log Page: Not Supported 00:23:21.357 Get Log Page Extended Data: Supported 00:23:21.357 Telemetry Log Pages: Not Supported 00:23:21.357 Persistent Event Log Pages: Not Supported 00:23:21.357 Supported Log Pages Log Page: May Support 00:23:21.357 Commands Supported & Effects Log Page: Not Supported 00:23:21.357 Feature Identifiers & Effects Log Page:May Support 00:23:21.357 NVMe-MI Commands & Effects Log Page: May Support 00:23:21.357 Data Area 4 for Telemetry Log: Not Supported 00:23:21.357 Error Log Page Entries Supported: 1 00:23:21.357 Keep Alive: Not Supported 00:23:21.357 00:23:21.357 NVM Command Set Attributes 00:23:21.357 ========================== 00:23:21.357 Submission Queue Entry Size 00:23:21.357 Max: 1 00:23:21.357 Min: 1 00:23:21.357 Completion Queue Entry Size 00:23:21.357 Max: 1 00:23:21.357 Min: 1 00:23:21.357 Number of Namespaces: 0 00:23:21.357 Compare Command: Not Supported 00:23:21.357 Write Uncorrectable Command: Not Supported 00:23:21.357 Dataset Management Command: Not Supported 00:23:21.357 Write Zeroes Command: Not Supported 00:23:21.357 Set Features Save Field: Not Supported 00:23:21.357 Reservations: Not Supported 00:23:21.357 Timestamp: Not Supported 00:23:21.357 Copy: Not Supported 00:23:21.357 Volatile Write Cache: Not Present 00:23:21.357 Atomic Write Unit (Normal): 1 00:23:21.358 Atomic Write Unit (PFail): 1 00:23:21.358 Atomic Compare & Write Unit: 1 00:23:21.358 Fused Compare & Write: Not Supported 00:23:21.358 Scatter-Gather List 00:23:21.358 SGL Command Set: Supported 00:23:21.358 SGL Keyed: Not Supported 00:23:21.358 SGL Bit Bucket Descriptor: Not Supported 00:23:21.358 SGL Metadata Pointer: Not Supported 00:23:21.358 Oversized SGL: Not Supported 00:23:21.358 SGL Metadata Address: Not Supported 00:23:21.358 SGL Offset: Supported 00:23:21.358 Transport SGL Data Block: Not Supported 00:23:21.358 Replay Protected Memory Block: Not Supported 00:23:21.358 00:23:21.358 Firmware Slot Information 00:23:21.358 ========================= 00:23:21.358 Active slot: 0 00:23:21.358 00:23:21.358 00:23:21.358 Error Log 
00:23:21.358 ========= 00:23:21.358 00:23:21.358 Active Namespaces 00:23:21.358 ================= 00:23:21.358 Discovery Log Page 00:23:21.358 ================== 00:23:21.358 Generation Counter: 2 00:23:21.358 Number of Records: 2 00:23:21.358 Record Format: 0 00:23:21.358 00:23:21.358 Discovery Log Entry 0 00:23:21.358 ---------------------- 00:23:21.358 Transport Type: 3 (TCP) 00:23:21.358 Address Family: 1 (IPv4) 00:23:21.358 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:21.358 Entry Flags: 00:23:21.358 Duplicate Returned Information: 0 00:23:21.358 Explicit Persistent Connection Support for Discovery: 0 00:23:21.358 Transport Requirements: 00:23:21.358 Secure Channel: Not Specified 00:23:21.358 Port ID: 1 (0x0001) 00:23:21.358 Controller ID: 65535 (0xffff) 00:23:21.358 Admin Max SQ Size: 32 00:23:21.358 Transport Service Identifier: 4420 00:23:21.358 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:21.358 Transport Address: 10.0.0.1 00:23:21.358 Discovery Log Entry 1 00:23:21.358 ---------------------- 00:23:21.358 Transport Type: 3 (TCP) 00:23:21.358 Address Family: 1 (IPv4) 00:23:21.358 Subsystem Type: 2 (NVM Subsystem) 00:23:21.358 Entry Flags: 00:23:21.358 Duplicate Returned Information: 0 00:23:21.358 Explicit Persistent Connection Support for Discovery: 0 00:23:21.358 Transport Requirements: 00:23:21.358 Secure Channel: Not Specified 00:23:21.358 Port ID: 1 (0x0001) 00:23:21.358 Controller ID: 65535 (0xffff) 00:23:21.358 Admin Max SQ Size: 32 00:23:21.358 Transport Service Identifier: 4420 00:23:21.358 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:21.358 Transport Address: 10.0.0.1 00:23:21.358 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:21.618 get_feature(0x01) failed 00:23:21.618 get_feature(0x02) failed 00:23:21.618 get_feature(0x04) failed 00:23:21.618 ===================================================== 00:23:21.618 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:21.618 ===================================================== 00:23:21.618 Controller Capabilities/Features 00:23:21.618 ================================ 00:23:21.618 Vendor ID: 0000 00:23:21.618 Subsystem Vendor ID: 0000 00:23:21.618 Serial Number: 32dbf1c9743647f9f328 00:23:21.618 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:21.618 Firmware Version: 6.8.9-20 00:23:21.618 Recommended Arb Burst: 6 00:23:21.618 IEEE OUI Identifier: 00 00 00 00:23:21.618 Multi-path I/O 00:23:21.618 May have multiple subsystem ports: Yes 00:23:21.618 May have multiple controllers: Yes 00:23:21.618 Associated with SR-IOV VF: No 00:23:21.618 Max Data Transfer Size: Unlimited 00:23:21.618 Max Number of Namespaces: 1024 00:23:21.618 Max Number of I/O Queues: 128 00:23:21.618 NVMe Specification Version (VS): 1.3 00:23:21.618 NVMe Specification Version (Identify): 1.3 00:23:21.618 Maximum Queue Entries: 1024 00:23:21.618 Contiguous Queues Required: No 00:23:21.618 Arbitration Mechanisms Supported 00:23:21.618 Weighted Round Robin: Not Supported 00:23:21.618 Vendor Specific: Not Supported 00:23:21.618 Reset Timeout: 7500 ms 00:23:21.618 Doorbell Stride: 4 bytes 00:23:21.618 NVM Subsystem Reset: Not Supported 00:23:21.618 Command Sets Supported 00:23:21.618 NVM Command Set: Supported 00:23:21.618 Boot Partition: Not Supported 00:23:21.618 Memory 
Page Size Minimum: 4096 bytes 00:23:21.618 Memory Page Size Maximum: 4096 bytes 00:23:21.618 Persistent Memory Region: Not Supported 00:23:21.618 Optional Asynchronous Events Supported 00:23:21.618 Namespace Attribute Notices: Supported 00:23:21.618 Firmware Activation Notices: Not Supported 00:23:21.618 ANA Change Notices: Supported 00:23:21.618 PLE Aggregate Log Change Notices: Not Supported 00:23:21.618 LBA Status Info Alert Notices: Not Supported 00:23:21.618 EGE Aggregate Log Change Notices: Not Supported 00:23:21.618 Normal NVM Subsystem Shutdown event: Not Supported 00:23:21.618 Zone Descriptor Change Notices: Not Supported 00:23:21.618 Discovery Log Change Notices: Not Supported 00:23:21.618 Controller Attributes 00:23:21.618 128-bit Host Identifier: Supported 00:23:21.618 Non-Operational Permissive Mode: Not Supported 00:23:21.618 NVM Sets: Not Supported 00:23:21.618 Read Recovery Levels: Not Supported 00:23:21.618 Endurance Groups: Not Supported 00:23:21.618 Predictable Latency Mode: Not Supported 00:23:21.618 Traffic Based Keep ALive: Supported 00:23:21.618 Namespace Granularity: Not Supported 00:23:21.618 SQ Associations: Not Supported 00:23:21.618 UUID List: Not Supported 00:23:21.618 Multi-Domain Subsystem: Not Supported 00:23:21.618 Fixed Capacity Management: Not Supported 00:23:21.618 Variable Capacity Management: Not Supported 00:23:21.618 Delete Endurance Group: Not Supported 00:23:21.618 Delete NVM Set: Not Supported 00:23:21.618 Extended LBA Formats Supported: Not Supported 00:23:21.618 Flexible Data Placement Supported: Not Supported 00:23:21.618 00:23:21.618 Controller Memory Buffer Support 00:23:21.618 ================================ 00:23:21.618 Supported: No 00:23:21.618 00:23:21.618 Persistent Memory Region Support 00:23:21.618 ================================ 00:23:21.618 Supported: No 00:23:21.618 00:23:21.618 Admin Command Set Attributes 00:23:21.618 ============================ 00:23:21.618 Security Send/Receive: Not Supported 00:23:21.618 Format NVM: Not Supported 00:23:21.618 Firmware Activate/Download: Not Supported 00:23:21.618 Namespace Management: Not Supported 00:23:21.618 Device Self-Test: Not Supported 00:23:21.618 Directives: Not Supported 00:23:21.618 NVMe-MI: Not Supported 00:23:21.618 Virtualization Management: Not Supported 00:23:21.618 Doorbell Buffer Config: Not Supported 00:23:21.618 Get LBA Status Capability: Not Supported 00:23:21.618 Command & Feature Lockdown Capability: Not Supported 00:23:21.618 Abort Command Limit: 4 00:23:21.618 Async Event Request Limit: 4 00:23:21.618 Number of Firmware Slots: N/A 00:23:21.618 Firmware Slot 1 Read-Only: N/A 00:23:21.618 Firmware Activation Without Reset: N/A 00:23:21.618 Multiple Update Detection Support: N/A 00:23:21.618 Firmware Update Granularity: No Information Provided 00:23:21.618 Per-Namespace SMART Log: Yes 00:23:21.618 Asymmetric Namespace Access Log Page: Supported 00:23:21.618 ANA Transition Time : 10 sec 00:23:21.618 00:23:21.618 Asymmetric Namespace Access Capabilities 00:23:21.618 ANA Optimized State : Supported 00:23:21.618 ANA Non-Optimized State : Supported 00:23:21.618 ANA Inaccessible State : Supported 00:23:21.618 ANA Persistent Loss State : Supported 00:23:21.618 ANA Change State : Supported 00:23:21.618 ANAGRPID is not changed : No 00:23:21.618 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:21.618 00:23:21.618 ANA Group Identifier Maximum : 128 00:23:21.618 Number of ANA Group Identifiers : 128 00:23:21.618 Max Number of Allowed Namespaces : 1024 00:23:21.618 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:23:21.618 Command Effects Log Page: Supported 00:23:21.618 Get Log Page Extended Data: Supported 00:23:21.618 Telemetry Log Pages: Not Supported 00:23:21.618 Persistent Event Log Pages: Not Supported 00:23:21.618 Supported Log Pages Log Page: May Support 00:23:21.618 Commands Supported & Effects Log Page: Not Supported 00:23:21.618 Feature Identifiers & Effects Log Page:May Support 00:23:21.618 NVMe-MI Commands & Effects Log Page: May Support 00:23:21.618 Data Area 4 for Telemetry Log: Not Supported 00:23:21.618 Error Log Page Entries Supported: 128 00:23:21.618 Keep Alive: Supported 00:23:21.618 Keep Alive Granularity: 1000 ms 00:23:21.618 00:23:21.618 NVM Command Set Attributes 00:23:21.618 ========================== 00:23:21.618 Submission Queue Entry Size 00:23:21.618 Max: 64 00:23:21.618 Min: 64 00:23:21.618 Completion Queue Entry Size 00:23:21.618 Max: 16 00:23:21.618 Min: 16 00:23:21.618 Number of Namespaces: 1024 00:23:21.618 Compare Command: Not Supported 00:23:21.618 Write Uncorrectable Command: Not Supported 00:23:21.618 Dataset Management Command: Supported 00:23:21.618 Write Zeroes Command: Supported 00:23:21.618 Set Features Save Field: Not Supported 00:23:21.618 Reservations: Not Supported 00:23:21.618 Timestamp: Not Supported 00:23:21.618 Copy: Not Supported 00:23:21.618 Volatile Write Cache: Present 00:23:21.618 Atomic Write Unit (Normal): 1 00:23:21.618 Atomic Write Unit (PFail): 1 00:23:21.618 Atomic Compare & Write Unit: 1 00:23:21.618 Fused Compare & Write: Not Supported 00:23:21.618 Scatter-Gather List 00:23:21.618 SGL Command Set: Supported 00:23:21.618 SGL Keyed: Not Supported 00:23:21.618 SGL Bit Bucket Descriptor: Not Supported 00:23:21.618 SGL Metadata Pointer: Not Supported 00:23:21.618 Oversized SGL: Not Supported 00:23:21.618 SGL Metadata Address: Not Supported 00:23:21.618 SGL Offset: Supported 00:23:21.618 Transport SGL Data Block: Not Supported 00:23:21.618 Replay Protected Memory Block: Not Supported 00:23:21.618 00:23:21.618 Firmware Slot Information 00:23:21.618 ========================= 00:23:21.618 Active slot: 0 00:23:21.618 00:23:21.618 Asymmetric Namespace Access 00:23:21.618 =========================== 00:23:21.618 Change Count : 0 00:23:21.618 Number of ANA Group Descriptors : 1 00:23:21.618 ANA Group Descriptor : 0 00:23:21.618 ANA Group ID : 1 00:23:21.618 Number of NSID Values : 1 00:23:21.618 Change Count : 0 00:23:21.618 ANA State : 1 00:23:21.618 Namespace Identifier : 1 00:23:21.618 00:23:21.618 Commands Supported and Effects 00:23:21.618 ============================== 00:23:21.618 Admin Commands 00:23:21.618 -------------- 00:23:21.618 Get Log Page (02h): Supported 00:23:21.618 Identify (06h): Supported 00:23:21.618 Abort (08h): Supported 00:23:21.618 Set Features (09h): Supported 00:23:21.618 Get Features (0Ah): Supported 00:23:21.618 Asynchronous Event Request (0Ch): Supported 00:23:21.618 Keep Alive (18h): Supported 00:23:21.618 I/O Commands 00:23:21.618 ------------ 00:23:21.618 Flush (00h): Supported 00:23:21.618 Write (01h): Supported LBA-Change 00:23:21.618 Read (02h): Supported 00:23:21.618 Write Zeroes (08h): Supported LBA-Change 00:23:21.618 Dataset Management (09h): Supported 00:23:21.619 00:23:21.619 Error Log 00:23:21.619 ========= 00:23:21.619 Entry: 0 00:23:21.619 Error Count: 0x3 00:23:21.619 Submission Queue Id: 0x0 00:23:21.619 Command Id: 0x5 00:23:21.619 Phase Bit: 0 00:23:21.619 Status Code: 0x2 00:23:21.619 Status Code Type: 0x0 00:23:21.619 Do Not Retry: 1 00:23:21.619 Error 
Location: 0x28 00:23:21.619 LBA: 0x0 00:23:21.619 Namespace: 0x0 00:23:21.619 Vendor Log Page: 0x0 00:23:21.619 ----------- 00:23:21.619 Entry: 1 00:23:21.619 Error Count: 0x2 00:23:21.619 Submission Queue Id: 0x0 00:23:21.619 Command Id: 0x5 00:23:21.619 Phase Bit: 0 00:23:21.619 Status Code: 0x2 00:23:21.619 Status Code Type: 0x0 00:23:21.619 Do Not Retry: 1 00:23:21.619 Error Location: 0x28 00:23:21.619 LBA: 0x0 00:23:21.619 Namespace: 0x0 00:23:21.619 Vendor Log Page: 0x0 00:23:21.619 ----------- 00:23:21.619 Entry: 2 00:23:21.619 Error Count: 0x1 00:23:21.619 Submission Queue Id: 0x0 00:23:21.619 Command Id: 0x4 00:23:21.619 Phase Bit: 0 00:23:21.619 Status Code: 0x2 00:23:21.619 Status Code Type: 0x0 00:23:21.619 Do Not Retry: 1 00:23:21.619 Error Location: 0x28 00:23:21.619 LBA: 0x0 00:23:21.619 Namespace: 0x0 00:23:21.619 Vendor Log Page: 0x0 00:23:21.619 00:23:21.619 Number of Queues 00:23:21.619 ================ 00:23:21.619 Number of I/O Submission Queues: 128 00:23:21.619 Number of I/O Completion Queues: 128 00:23:21.619 00:23:21.619 ZNS Specific Controller Data 00:23:21.619 ============================ 00:23:21.619 Zone Append Size Limit: 0 00:23:21.619 00:23:21.619 00:23:21.619 Active Namespaces 00:23:21.619 ================= 00:23:21.619 get_feature(0x05) failed 00:23:21.619 Namespace ID:1 00:23:21.619 Command Set Identifier: NVM (00h) 00:23:21.619 Deallocate: Supported 00:23:21.619 Deallocated/Unwritten Error: Not Supported 00:23:21.619 Deallocated Read Value: Unknown 00:23:21.619 Deallocate in Write Zeroes: Not Supported 00:23:21.619 Deallocated Guard Field: 0xFFFF 00:23:21.619 Flush: Supported 00:23:21.619 Reservation: Not Supported 00:23:21.619 Namespace Sharing Capabilities: Multiple Controllers 00:23:21.619 Size (in LBAs): 1310720 (5GiB) 00:23:21.619 Capacity (in LBAs): 1310720 (5GiB) 00:23:21.619 Utilization (in LBAs): 1310720 (5GiB) 00:23:21.619 UUID: 2e2c324a-539e-4dc5-b7fe-4e6f04c43379 00:23:21.619 Thin Provisioning: Not Supported 00:23:21.619 Per-NS Atomic Units: Yes 00:23:21.619 Atomic Boundary Size (Normal): 0 00:23:21.619 Atomic Boundary Size (PFail): 0 00:23:21.619 Atomic Boundary Offset: 0 00:23:21.619 NGUID/EUI64 Never Reused: No 00:23:21.619 ANA group ID: 1 00:23:21.619 Namespace Write Protected: No 00:23:21.619 Number of LBA Formats: 1 00:23:21.619 Current LBA Format: LBA Format #00 00:23:21.619 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:21.619 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.619 rmmod nvme_tcp 00:23:21.619 rmmod nvme_fabrics 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:21.619 05:41:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:21.619 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:21.878 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.878 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:21.878 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:21.878 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:21.879 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:21.879 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:21.879 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:21.879 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:21.879 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:21.879 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:21.879 05:41:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:21.879 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:22.137 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:22.137 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:22.137 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:22.137 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:22.137 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:22.137 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:22.137 05:41:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:22.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:22.705 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:22.964 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:22.964 ************************************ 00:23:22.964 END TEST nvmf_identify_kernel_target 00:23:22.964 ************************************ 00:23:22.964 00:23:22.964 real 0m3.464s 00:23:22.964 user 0m1.251s 00:23:22.964 sys 0m1.588s 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.964 ************************************ 00:23:22.964 START TEST nvmf_auth_host 00:23:22.964 ************************************ 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:22.964 * Looking for test storage... 
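The configfs sequence traced above in nvmf/common.sh (@686-@705 for setup, @712-@723 for clean_kernel_target) amounts to a small standalone script. The sketch below is a hedged reconstruction of that flow, not the SPDK helper itself: the attribute file names are inferred from the standard kernel nvmet configfs layout (the trace only shows the values being echoed), while the NQN, backing device, and listen address are the ones used in this run.

#!/usr/bin/env bash
# Minimal kernel NVMe-oF/TCP target via configfs, mirroring the traced setup.
# Attribute paths are assumptions based on the usual nvmet configfs layout.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
dev=/dev/nvme1n1
cfg=/sys/kernel/config/nvmet

modprobe nvmet nvmet_tcp

# Subsystem with one namespace backed by the block device
mkdir "$cfg/subsystems/$nqn"
echo "SPDK-$nqn" > "$cfg/subsystems/$nqn/attr_model"   # the trace echoes SPDK-<nqn>; attr_model is the usual destination
echo 1 > "$cfg/subsystems/$nqn/attr_allow_any_host"
mkdir "$cfg/subsystems/$nqn/namespaces/1"
echo "$dev" > "$cfg/subsystems/$nqn/namespaces/1/device_path"
echo 1 > "$cfg/subsystems/$nqn/namespaces/1/enable"

# TCP port on 10.0.0.1:4420, then expose the subsystem through it
mkdir "$cfg/ports/1"
echo 10.0.0.1 > "$cfg/ports/1/addr_traddr"
echo tcp > "$cfg/ports/1/addr_trtype"
echo 4420 > "$cfg/ports/1/addr_trsvcid"
echo ipv4 > "$cfg/ports/1/addr_adrfam"
ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"

# Teardown, as clean_kernel_target does: disable, unlink, rmdir in reverse, unload
# echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
# rm -f "$cfg/ports/1/subsystems/$nqn"
# rmdir "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1" "$cfg/subsystems/$nqn"
# modprobe -r nvmet_tcp nvmet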
00:23:22.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:22.964 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.224 --rc genhtml_branch_coverage=1 00:23:23.224 --rc genhtml_function_coverage=1 00:23:23.224 --rc genhtml_legend=1 00:23:23.224 --rc geninfo_all_blocks=1 00:23:23.224 --rc geninfo_unexecuted_blocks=1 00:23:23.224 00:23:23.224 ' 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.224 --rc genhtml_branch_coverage=1 00:23:23.224 --rc genhtml_function_coverage=1 00:23:23.224 --rc genhtml_legend=1 00:23:23.224 --rc geninfo_all_blocks=1 00:23:23.224 --rc geninfo_unexecuted_blocks=1 00:23:23.224 00:23:23.224 ' 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.224 --rc genhtml_branch_coverage=1 00:23:23.224 --rc genhtml_function_coverage=1 00:23:23.224 --rc genhtml_legend=1 00:23:23.224 --rc geninfo_all_blocks=1 00:23:23.224 --rc geninfo_unexecuted_blocks=1 00:23:23.224 00:23:23.224 ' 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:23.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.224 --rc genhtml_branch_coverage=1 00:23:23.224 --rc genhtml_function_coverage=1 00:23:23.224 --rc genhtml_legend=1 00:23:23.224 --rc geninfo_all_blocks=1 00:23:23.224 --rc geninfo_unexecuted_blocks=1 00:23:23.224 00:23:23.224 ' 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.224 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.225 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:23.225 Cannot find device "nvmf_init_br" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:23.225 Cannot find device "nvmf_init_br2" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:23.225 Cannot find device "nvmf_tgt_br" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:23.225 Cannot find device "nvmf_tgt_br2" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:23.225 Cannot find device "nvmf_init_br" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:23.225 Cannot find device "nvmf_init_br2" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:23.225 Cannot find device "nvmf_tgt_br" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:23.225 Cannot find device "nvmf_tgt_br2" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:23.225 Cannot find device "nvmf_br" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:23.225 Cannot find device "nvmf_init_if" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:23.225 Cannot find device "nvmf_init_if2" 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:23.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.225 05:41:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:23.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:23.225 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:23.484 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:23.485 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:23.485 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:23:23.485 00:23:23.485 --- 10.0.0.3 ping statistics --- 00:23:23.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.485 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:23.485 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:23.485 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:23:23.485 00:23:23.485 --- 10.0.0.4 ping statistics --- 00:23:23.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.485 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:23.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:23.485 00:23:23.485 --- 10.0.0.1 ping statistics --- 00:23:23.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.485 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:23.485 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:23.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:23.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:23:23.485 00:23:23.485 --- 10.0.0.2 ping statistics --- 00:23:23.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.485 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=86291 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 86291 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 86291 ']' 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
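The nvmf_veth_init sequence traced above builds a fixed topology: the initiator-side veth ends (10.0.0.1 and 10.0.0.2) stay in the root namespace, their bridge-side peers are enslaved to nvmf_br, and the target-side ends (10.0.0.3 and 10.0.0.4) sit inside the nvmf_tgt_ns_spdk namespace where nvmf_tgt is then started. A condensed standalone sketch of the same wiring, trimmed to a single initiator/target pair, follows; the full helper repeats the steps for the second pair and opens TCP/4420 on nvmf_init_if2 as well.

#!/usr/bin/env bash
# One bridged initiator/target veth pair, mirroring the nvmf_veth_init wiring above.
set -e
ns=nvmf_tgt_ns_spdk

ip netns add "$ns"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$ns"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$ns" ip link set nvmf_tgt_if up
ip netns exec "$ns" ip link set lo up

# Bridge the peer ends so 10.0.0.1 (root ns) can reach 10.0.0.3 (target ns)
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Accept NVMe/TCP on the initiator interface, allow bridged forwarding, verify connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3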
00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.744 05:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=94cfaf76ff6d126d7fc463e7c0140aa8 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pNu 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 94cfaf76ff6d126d7fc463e7c0140aa8 0 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 94cfaf76ff6d126d7fc463e7c0140aa8 0 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=94cfaf76ff6d126d7fc463e7c0140aa8 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:24.681 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pNu 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pNu 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pNu 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:24.940 05:41:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a732144c045d4de731a8c8d625538a897af0eab796cd89fcc90634f7f494a5e1 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Gos 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a732144c045d4de731a8c8d625538a897af0eab796cd89fcc90634f7f494a5e1 3 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a732144c045d4de731a8c8d625538a897af0eab796cd89fcc90634f7f494a5e1 3 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a732144c045d4de731a8c8d625538a897af0eab796cd89fcc90634f7f494a5e1 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:24.940 05:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Gos 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Gos 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Gos 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9db7ccc5df2e448367e863d5cd1cac042e987b84b1cf85ca 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5zG 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9db7ccc5df2e448367e863d5cd1cac042e987b84b1cf85ca 0 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9db7ccc5df2e448367e863d5cd1cac042e987b84b1cf85ca 0 
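Each gen_dhchap_key call traced above follows the same pattern: draw random bytes as a hex string with xxd, wrap that string in the DHHC-1 secret representation (two-digit hash identifier plus base64 of the secret with a little-endian CRC-32 appended), and store the result in a mode-0600 temp file. The sketch below is a rough standalone equivalent, not the SPDK helper itself; the hash-id mapping (0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512) and the use of the ASCII hex string as the secret bytes follow what the trace suggests.

#!/usr/bin/env bash
# Sketch of gen_dhchap_key: random hex secret -> DHHC-1 formatted key file.
gen_dhchap_key() {
    local digest=$1 len=$2                                  # e.g. "sha256" 32
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local hex file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # len hex characters of randomness
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    python3 -c 'import sys, base64, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()), end="")' \
        "$hex" "${ids[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

key_file=$(gen_dhchap_key sha256 32)                        # e.g. /tmp/spdk.key-sha256.XXX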
00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9db7ccc5df2e448367e863d5cd1cac042e987b84b1cf85ca 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:24.940 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5zG 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5zG 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5zG 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b15ebe5b935c4e0e8b2bc1847a2167d12c05f94ea4bb0d12 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.tgh 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b15ebe5b935c4e0e8b2bc1847a2167d12c05f94ea4bb0d12 2 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b15ebe5b935c4e0e8b2bc1847a2167d12c05f94ea4bb0d12 2 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b15ebe5b935c4e0e8b2bc1847a2167d12c05f94ea4bb0d12 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.tgh 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.tgh 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.tgh 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:24.941 05:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=37340175545d20ce0b83c586a396956f 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RdB 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 37340175545d20ce0b83c586a396956f 1 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 37340175545d20ce0b83c586a396956f 1 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=37340175545d20ce0b83c586a396956f 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:24.941 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RdB 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RdB 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.RdB 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dc47d8c2fa23dc270884262a4676f610 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Ibs 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dc47d8c2fa23dc270884262a4676f610 1 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dc47d8c2fa23dc270884262a4676f610 1 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=dc47d8c2fa23dc270884262a4676f610 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Ibs 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Ibs 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Ibs 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=face96842ade070f35af167d0030888018d312166edf7536 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1w7 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key face96842ade070f35af167d0030888018d312166edf7536 2 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 face96842ade070f35af167d0030888018d312166edf7536 2 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=face96842ade070f35af167d0030888018d312166edf7536 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1w7 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1w7 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1w7 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:25.199 05:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=41aab7dd6941595a44fbecf69060ca0c 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.B8x 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 41aab7dd6941595a44fbecf69060ca0c 0 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 41aab7dd6941595a44fbecf69060ca0c 0 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=41aab7dd6941595a44fbecf69060ca0c 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.B8x 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.B8x 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.B8x 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:25.199 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d74e4c40009b2760aa44a768ab60f40a0e85a09178a0cc6b6b9c5388589cdb9e 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jE9 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d74e4c40009b2760aa44a768ab60f40a0e85a09178a0cc6b6b9c5388589cdb9e 3 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d74e4c40009b2760aa44a768ab60f40a0e85a09178a0cc6b6b9c5388589cdb9e 3 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d74e4c40009b2760aa44a768ab60f40a0e85a09178a0cc6b6b9c5388589cdb9e 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:25.200 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:23:25.457 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jE9 00:23:25.457 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jE9 00:23:25.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.457 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.jE9 00:23:25.457 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:25.457 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 86291 00:23:25.457 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 86291 ']' 00:23:25.457 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.457 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.457 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pNu 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Gos ]] 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gos 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.458 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5zG 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.tgh ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.tgh 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.RdB 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Ibs ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ibs 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1w7 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.B8x ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.B8x 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.jE9 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:25.716 05:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:25.716 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:25.717 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:25.717 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:25.717 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:25.717 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:25.717 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:25.717 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:25.717 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:25.717 05:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:25.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:25.975 Waiting for block devices as requested 00:23:25.975 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:26.234 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:26.801 No valid GPT data, bailing 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:26.801 05:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:26.801 No valid GPT data, bailing 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:26.801 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:27.059 No valid GPT data, bailing 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:27.059 No valid GPT data, bailing 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:27.059 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -a 10.0.0.1 -t tcp -s 4420 00:23:27.060 00:23:27.060 Discovery Log Number of Records 2, Generation counter 2 00:23:27.060 =====Discovery Log Entry 0====== 00:23:27.060 trtype: tcp 00:23:27.060 adrfam: ipv4 00:23:27.060 subtype: current discovery subsystem 00:23:27.060 treq: not specified, sq flow control disable supported 00:23:27.060 portid: 1 00:23:27.060 trsvcid: 4420 00:23:27.060 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:27.060 traddr: 10.0.0.1 00:23:27.060 eflags: none 00:23:27.060 sectype: none 00:23:27.060 =====Discovery Log Entry 1====== 00:23:27.060 trtype: tcp 00:23:27.060 adrfam: ipv4 00:23:27.060 subtype: nvme subsystem 00:23:27.060 treq: not specified, sq flow control disable supported 00:23:27.060 portid: 1 00:23:27.060 trsvcid: 4420 00:23:27.060 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:27.060 traddr: 10.0.0.1 00:23:27.060 eflags: none 00:23:27.060 sectype: none 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.060 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.319 nvme0n1 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.319 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.578 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.578 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:27.578 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.578 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.578 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.578 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.579 nvme0n1 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.579 
05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.579 05:41:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.579 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.838 nvme0n1 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:27.838 05:41:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.838 05:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.838 nvme0n1 00:23:27.838 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.838 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.838 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.838 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.838 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.838 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:28.097 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.097 05:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 nvme0n1 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.098 
05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.098 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
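The entries above complete one full pass of the test's inner loop for the sha256/ffdhe2048 combination: for each keyid the target-side secret is programmed with nvmet_auth_set_key, the host is restricted to that digest and DH group via bdev_nvme_set_options, a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key pair, its presence is confirmed with bdev_nvme_get_controllers, and it is detached again. A minimal stand-alone sketch of that per-key RPC sequence, assuming a running SPDK target, scripts/rpc.py in place of the test's rpc_cmd wrapper, and keyring entries already registered under the names key2/ckey2 earlier in the run (not shown here):

  # One iteration of the auth loop (keyid=2, hmac(sha256) + ffdhe2048); values taken from the trace above.
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc.py bdev_nvme_get_controllers          # expect a single controller named nvme0
  rpc.py bdev_nvme_detach_controller nvme0  # tear down before the next keyid/dhgroup

The log below repeats the same sequence for the ffdhe3072, ffdhe4096 and ffdhe6144 groups.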
00:23:28.357 nvme0n1 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.357 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.616 05:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.616 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.875 nvme0n1 00:23:28.875 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.875 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.875 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.875 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.875 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.875 05:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.875 05:41:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.875 05:41:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.875 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.134 nvme0n1 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.134 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.135 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.135 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.135 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.135 nvme0n1 00:23:29.135 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.135 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.135 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.135 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.135 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.394 nvme0n1 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.394 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.654 nvme0n1 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.654 05:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.221 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:30.221 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:30.221 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:30.221 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:30.221 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.221 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.221 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.222 05:41:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.222 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.481 nvme0n1 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.481 05:41:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.481 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.740 nvme0n1 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.740 05:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.998 nvme0n1 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.998 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.256 nvme0n1 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.256 05:41:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.256 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.515 nvme0n1 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.515 05:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.438 nvme0n1 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.438 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.439 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.710 nvme0n1 00:23:33.710 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.710 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.710 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.710 05:41:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.710 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.710 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.710 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.710 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.710 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.710 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.969 05:41:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.969 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.970 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.970 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.970 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.970 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.970 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.970 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.970 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.970 05:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.970 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.970 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.970 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.970 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.229 nvme0n1 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.229 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.230 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.230 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.230 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.230 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.230 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.230 
05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.489 nvme0n1 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.489 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.748 05:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.006 nvme0n1 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.006 05:41:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.006 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.007 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.573 nvme0n1 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.573 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.574 05:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.142 nvme0n1 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.142 
05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.142 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.710 nvme0n1 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.710 05:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.279 nvme0n1 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.279 05:41:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.279 05:41:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.279 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.847 nvme0n1 00:23:37.847 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.847 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.847 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.847 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.847 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.847 05:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.847 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.848 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:38.107 nvme0n1 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.107 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.108 nvme0n1 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.108 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:38.367 
05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.367 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.368 nvme0n1 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.368 
05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.368 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.627 nvme0n1 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.627 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.628 nvme0n1 00:23:38.628 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.887 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.888 05:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.888 nvme0n1 00:23:38.888 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.888 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.888 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.888 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.888 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.888 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.888 
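Each connect_authenticate pass in this log follows the same shape: restrict the host to a single digest/dhgroup, attach with the key slot under test, confirm the controller came up under the expected name, then detach. A sketch assembled from the rpc_cmd calls visible in the trace (the surrounding variable plumbing is an assumption):

    # One connect_authenticate iteration, pieced together from host/auth.sh@55-65 above.
    digest=sha384 dhgroup=ffdhe3072 keyid=0
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # auth succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0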
05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.888 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.888 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.888 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.147 05:41:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.147 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.148 nvme0n1 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:39.148 05:41:19 
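The DHHC-1:NN:...: strings echoed above are NVMe in-band authentication secrets. As background (stated from the usual DH-HMAC-CHAP secret representation, not taken from this log), the second field says how the base64 payload is to be used: 00 for an unhashed secret, 01/02/03 for SHA-256/384/512, and the payload decodes to the raw key plus a 4-byte CRC. A small illustrative check against one of the secrets printed above:

    # Illustrative only; the field meanings are an assumption from the secret format, not from this log.
    secret='DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj:'
    hmac_id=$(cut -d: -f2 <<< "$secret")     # 00=unhashed, 01/02/03=SHA-256/384/512
    payload=$(cut -d: -f3 <<< "$secret")
    echo "hmac id: $hmac_id, decoded bytes: $(base64 -d <<< "$payload" | wc -c)"   # 32-byte key + 4-byte CRC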
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.148 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.407 nvme0n1 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:39.407 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.408 05:41:19 
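The nvmet_auth_set_key calls above show bare echo commands because xtrace does not print redirections; on the target side these values are written into the kernel nvmet configfs entry for the host. A sketch of what such a helper plausibly does; the configfs path and host NQN below are assumptions, since the redirect targets are not visible in this log:

    # Hypothetical target-side key programming; path and NQN are illustrative assumptions.
    host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)'  > "$host_cfg/dhchap_hash"       # digest under test
    echo ffdhe3072       > "$host_cfg/dhchap_dhgroup"    # DH group under test
    echo "DHHC-1:02:..." > "$host_cfg/dhchap_key"        # host secret (truncated here)
    echo "DHHC-1:00:..." > "$host_cfg/dhchap_ctrl_key"   # controller secret, when a ckey exists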
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.408 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.667 nvme0n1 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.667 
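The ip_candidates block (nvmf/common.sh@769-783) that precedes every attach is the helper resolving which address to dial: it maps the transport to the name of an environment variable and dereferences it. A reconstruction consistent with the trace; the transport variable name used below is an assumption, the trace only shows the literal "tcp":

    # Sketch of the address-resolution helper seen at nvmf/common.sh@769-783.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # "tcp" in this run (assumed variable name)
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                           # indirect lookup; 10.0.0.1 here
        echo "${!ip}"
    }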
05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.667 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
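Key slot 4 has no controller secret (ckey is echoed empty above), so the expansion at host/auth.sh@58 produces an empty array and the attach is issued with --dhchap-key key4 only, exercising unidirectional authentication. The same idiom in isolation, with the attach line copied from the trace:

    # From host/auth.sh@58: emit the controller-key option pair only when ckeys[keyid] is non-empty.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # keyid=4 -> ckeys[4] is empty -> ckey=() -> the attach below omits --dhchap-ctrlr-key entirely.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"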
00:23:39.668 nvme0n1 00:23:39.668 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:39.927 05:41:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:39.927 05:41:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.927 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.186 nvme0n1 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.186 05:41:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:40.186 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.187 05:41:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.187 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.446 nvme0n1 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.446 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.706 nvme0n1 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.706 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.969 nvme0n1 00:23:40.970 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.970 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.970 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.970 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.970 05:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.970 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.231 nvme0n1 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.231 05:41:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.231 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.490 nvme0n1 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:41.490 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.491 05:41:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.491 05:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.058 nvme0n1 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.058 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.317 nvme0n1 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:42.317 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.318 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.577 nvme0n1 00:23:42.577 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.577 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.577 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.577 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.577 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.836 05:41:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.836 05:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.095 nvme0n1 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.095 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.096 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 nvme0n1 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.664 05:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.232 nvme0n1 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.232 05:41:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.232 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.491 05:41:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.491 05:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.750 nvme0n1 00:23:44.750 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:45.014 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:45.015 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.015 
05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.584 nvme0n1 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.584 05:41:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.152 nvme0n1 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:46.152 05:41:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.152 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.152 05:41:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.153 nvme0n1 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.153 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:46.412 05:41:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.412 nvme0n1 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.412 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.672 nvme0n1 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.672 nvme0n1 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.672 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.932 05:41:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.932 nvme0n1 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:46.932 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.933 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:47.192 nvme0n1 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.192 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.193 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.450 nvme0n1 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:47.450 
05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:47.450 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.451 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.709 nvme0n1 00:23:47.709 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.709 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.709 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.709 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.710 
05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.710 nvme0n1 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.710 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:47.969 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.970 05:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.970 nvme0n1 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.970 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 nvme0n1 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.230 
05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.230 05:41:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.230 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.489 nvme0n1 00:23:48.489 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.489 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.489 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:48.490 05:41:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.490 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.749 nvme0n1 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.749 05:41:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.749 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.750 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.750 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.750 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.750 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.750 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.750 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.750 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.750 05:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.750 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:48.750 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.750 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.009 nvme0n1 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.009 
05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.009 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
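The iterations traced above, and the ffdhe6144 iterations that follow, all run the same host/auth.sh path. A rough sketch of that loop, reconstructed from the xtrace markers in this excerpt (host/auth.sh@101-104 for the loop, @42-51 for nvmet_auth_set_key, @55-65 for connect_authenticate), is given below. It is illustrative only: rpc_cmd, nvmet_auth_set_key, and the keys[]/ckeys[] arrays are helpers and data defined earlier in the test suite and are assumed here rather than shown; the digest is fixed to sha512 because that is what this portion of the trace exercises.

  # Sketch of the per-dhgroup/per-key loop this trace is executing (not the verbatim script)
  for dhgroup in "${dhgroups[@]}"; do          # ffdhe3072, ffdhe4096, ffdhe6144, ... in this run
    for keyid in "${!keys[@]}"; do             # keyids 0..4 in this trace
      # Program the DH-HMAC-CHAP key (and controller key, if any) into the kernel nvmet target
      nvmet_auth_set_key "sha512" "$dhgroup" "$keyid"
      # Pass the controller key only when one is defined for this keyid (keyid 4 has none)
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      # Restrict the host to the digest/dhgroup under test
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # Connect with DH-HMAC-CHAP over TCP to the target listening on 10.0.0.1:4420
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
      # Verify the authenticated controller came up, then tear it down for the next iteration
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

Each "nvme0n1" line in the trace is the namespace reported after a successful authenticated attach, and the repeated "[[ 0 == 0 ]]" checks are the suite's return-code assertions around every rpc_cmd call.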
00:23:49.269 nvme0n1 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.269 05:41:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.269 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.837 nvme0n1 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.837 05:41:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:49.837 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.838 05:41:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.838 05:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.102 nvme0n1 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:50.102 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.103 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.361 nvme0n1 00:23:50.361 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.361 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.361 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.361 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.361 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:50.620 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.621 05:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.880 nvme0n1 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.880 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.462 nvme0n1 00:23:51.462 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.462 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.462 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.462 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTRjZmFmNzZmZjZkMTI2ZDdmYzQ2M2U3YzAxNDBhYTjVQssb: 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: ]] 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTczMjE0NGMwNDVkNGRlNzMxYThjOGQ2MjU1MzhhODk3YWYwZWFiNzk2Y2Q4OWZjYzkwNjM0ZjdmNDk0YTVlMcpIOIs=: 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.463 05:41:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.463 05:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.043 nvme0n1 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.043 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.044 05:41:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.044 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.612 nvme0n1 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.612 05:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.181 nvme0n1 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmFjZTk2ODQyYWRlMDcwZjM1YWYxNjdkMDAzMDg4ODAxOGQzMTIxNjZlZGY3NTM2kA0lYg==: 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: ]] 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDFhYWI3ZGQ2OTQxNTk1YTQ0ZmJlY2Y2OTA2MGNhMGNyi9My: 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.181 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.748 nvme0n1 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDc0ZTRjNDAwMDliMjc2MGFhNDRhNzY4YWI2MGY0MGEwZTg1YTA5MTc4YTBjYzZiNmI5YzUzODg1ODljZGI5ZfjjufA=: 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.748 05:41:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.748 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.749 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.749 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.749 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:53.749 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.749 05:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.315 nvme0n1 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.315 request: 00:23:54.315 { 00:23:54.315 "name": "nvme0", 00:23:54.315 "trtype": "tcp", 00:23:54.315 "traddr": "10.0.0.1", 00:23:54.315 "adrfam": "ipv4", 00:23:54.315 "trsvcid": "4420", 00:23:54.315 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:54.315 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:54.315 "prchk_reftag": false, 00:23:54.315 "prchk_guard": false, 00:23:54.315 "hdgst": false, 00:23:54.315 "ddgst": false, 00:23:54.315 "allow_unrecognized_csi": false, 00:23:54.315 "method": "bdev_nvme_attach_controller", 00:23:54.315 "req_id": 1 00:23:54.315 } 00:23:54.315 Got JSON-RPC error response 00:23:54.315 response: 00:23:54.315 { 00:23:54.315 "code": -5, 00:23:54.315 "message": "Input/output error" 00:23:54.315 } 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.315 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.574 request: 00:23:54.574 { 00:23:54.574 "name": "nvme0", 00:23:54.574 "trtype": "tcp", 00:23:54.574 "traddr": "10.0.0.1", 00:23:54.574 "adrfam": "ipv4", 00:23:54.574 "trsvcid": "4420", 00:23:54.574 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:54.574 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:54.574 "prchk_reftag": false, 00:23:54.574 "prchk_guard": false, 00:23:54.574 "hdgst": false, 00:23:54.574 "ddgst": false, 00:23:54.574 "dhchap_key": "key2", 00:23:54.574 "allow_unrecognized_csi": false, 00:23:54.574 "method": "bdev_nvme_attach_controller", 00:23:54.574 "req_id": 1 00:23:54.574 } 00:23:54.574 Got JSON-RPC error response 00:23:54.574 response: 00:23:54.574 { 00:23:54.574 "code": -5, 00:23:54.574 "message": "Input/output error" 00:23:54.574 } 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:54.574 05:41:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.574 request: 00:23:54.574 { 00:23:54.574 "name": "nvme0", 00:23:54.574 "trtype": "tcp", 00:23:54.574 "traddr": "10.0.0.1", 00:23:54.574 "adrfam": "ipv4", 00:23:54.574 "trsvcid": "4420", 
00:23:54.574 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:54.574 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:54.574 "prchk_reftag": false, 00:23:54.574 "prchk_guard": false, 00:23:54.574 "hdgst": false, 00:23:54.574 "ddgst": false, 00:23:54.574 "dhchap_key": "key1", 00:23:54.574 "dhchap_ctrlr_key": "ckey2", 00:23:54.574 "allow_unrecognized_csi": false, 00:23:54.574 "method": "bdev_nvme_attach_controller", 00:23:54.574 "req_id": 1 00:23:54.574 } 00:23:54.574 Got JSON-RPC error response 00:23:54.574 response: 00:23:54.574 { 00:23:54.574 "code": -5, 00:23:54.574 "message": "Input/output error" 00:23:54.574 } 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.574 nvme0n1 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.574 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.833 request: 00:23:54.833 { 00:23:54.833 "name": "nvme0", 00:23:54.833 "dhchap_key": "key1", 00:23:54.833 "dhchap_ctrlr_key": "ckey2", 00:23:54.833 "method": "bdev_nvme_set_keys", 00:23:54.833 "req_id": 1 00:23:54.833 } 00:23:54.833 Got JSON-RPC error response 00:23:54.833 response: 00:23:54.833 
{ 00:23:54.833 "code": -13, 00:23:54.833 "message": "Permission denied" 00:23:54.833 } 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:54.833 05:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:55.768 05:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.768 05:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:55.768 05:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.768 05:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.768 05:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRiN2NjYzVkZjJlNDQ4MzY3ZTg2M2Q1Y2QxY2FjMDQyZTk4N2I4NGIxY2Y4NWNhHY2Esg==: 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjE1ZWJlNWI5MzVjNGUwZThiMmJjMTg0N2EyMTY3ZDEyYzA1Zjk0ZWE0YmIwZDEy6qT+Rg==: 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.027 nvme0n1 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzczNDAxNzU1NDVkMjBjZTBiODNjNTg2YTM5Njk1NmYNNXDj: 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGM0N2Q4YzJmYTIzZGMyNzA4ODQyNjJhNDY3NmY2MTCH0rFJ: 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.027 request: 00:23:56.027 { 00:23:56.027 "name": "nvme0", 00:23:56.027 "dhchap_key": "key2", 00:23:56.027 "dhchap_ctrlr_key": "ckey1", 00:23:56.027 "method": "bdev_nvme_set_keys", 00:23:56.027 "req_id": 1 00:23:56.027 } 00:23:56.027 Got JSON-RPC error response 00:23:56.027 response: 00:23:56.027 { 00:23:56.027 "code": -13, 00:23:56.027 "message": "Permission denied" 00:23:56.027 } 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:56.027 05:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.405 rmmod nvme_tcp 00:23:57.405 rmmod nvme_fabrics 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 86291 ']' 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 86291 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 86291 ']' 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 86291 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86291 00:23:57.405 killing process with pid 86291 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86291' 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 86291 00:23:57.405 05:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 86291 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:57.974 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:58.234 05:41:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:58.234 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:58.493 05:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:59.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:59.061 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:23:59.321 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:59.321 05:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pNu /tmp/spdk.key-null.5zG /tmp/spdk.key-sha256.RdB /tmp/spdk.key-sha384.1w7 /tmp/spdk.key-sha512.jE9 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:23:59.321 05:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:59.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:59.580 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:59.580 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:59.842 00:23:59.842 real 0m36.755s 00:23:59.842 user 0m33.734s 00:23:59.842 sys 0m4.208s 00:23:59.842 ************************************ 00:23:59.842 END TEST nvmf_auth_host 00:23:59.842 ************************************ 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.842 ************************************ 00:23:59.842 START TEST nvmf_digest 00:23:59.842 ************************************ 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.842 * Looking for test storage... 
00:23:59.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:23:59.842 05:41:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.842 --rc genhtml_branch_coverage=1 00:23:59.842 --rc genhtml_function_coverage=1 00:23:59.842 --rc genhtml_legend=1 00:23:59.842 --rc geninfo_all_blocks=1 00:23:59.842 --rc geninfo_unexecuted_blocks=1 00:23:59.842 00:23:59.842 ' 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.842 --rc genhtml_branch_coverage=1 00:23:59.842 --rc genhtml_function_coverage=1 00:23:59.842 --rc genhtml_legend=1 00:23:59.842 --rc geninfo_all_blocks=1 00:23:59.842 --rc geninfo_unexecuted_blocks=1 00:23:59.842 00:23:59.842 ' 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.842 --rc genhtml_branch_coverage=1 00:23:59.842 --rc genhtml_function_coverage=1 00:23:59.842 --rc genhtml_legend=1 00:23:59.842 --rc geninfo_all_blocks=1 00:23:59.842 --rc geninfo_unexecuted_blocks=1 00:23:59.842 00:23:59.842 ' 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.842 --rc genhtml_branch_coverage=1 00:23:59.842 --rc genhtml_function_coverage=1 00:23:59.842 --rc genhtml_legend=1 00:23:59.842 --rc geninfo_all_blocks=1 00:23:59.842 --rc geninfo_unexecuted_blocks=1 00:23:59.842 00:23:59.842 ' 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:59.842 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.105 05:41:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.105 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:00.105 Cannot find device "nvmf_init_br" 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:00.105 Cannot find device "nvmf_init_br2" 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:00.105 Cannot find device "nvmf_tgt_br" 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:24:00.105 Cannot find device "nvmf_tgt_br2" 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:00.105 Cannot find device "nvmf_init_br" 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:00.105 Cannot find device "nvmf_init_br2" 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:24:00.105 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:00.105 Cannot find device "nvmf_tgt_br" 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:00.106 Cannot find device "nvmf_tgt_br2" 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:00.106 Cannot find device "nvmf_br" 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:00.106 Cannot find device "nvmf_init_if" 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:00.106 Cannot find device "nvmf_init_if2" 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:00.106 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:00.365 05:41:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:00.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:00.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:24:00.365 00:24:00.365 --- 10.0.0.3 ping statistics --- 00:24:00.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.365 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:00.365 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:00.365 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:24:00.365 00:24:00.365 --- 10.0.0.4 ping statistics --- 00:24:00.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.365 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:00.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:24:00.365 00:24:00.365 --- 10.0.0.1 ping statistics --- 00:24:00.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.365 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:00.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:24:00.365 00:24:00.365 --- 10.0.0.2 ping statistics --- 00:24:00.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.365 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:00.365 ************************************ 00:24:00.365 START TEST nvmf_digest_clean 00:24:00.365 ************************************ 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:24:00.365 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=87928 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 87928 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 87928 ']' 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.366 05:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.625 [2024-12-16 05:41:40.730515] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:00.625 [2024-12-16 05:41:40.731275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.884 [2024-12-16 05:41:40.923871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.884 [2024-12-16 05:41:41.046917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.884 [2024-12-16 05:41:41.046998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.884 [2024-12-16 05:41:41.047024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.884 [2024-12-16 05:41:41.047056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.884 [2024-12-16 05:41:41.047075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.884 [2024-12-16 05:41:41.048527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.821 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.821 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:01.822 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.822 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.822 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.822 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.822 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:01.822 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:01.822 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:01.822 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.822 05:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.822 [2024-12-16 05:41:41.902357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:01.822 null0 00:24:01.822 [2024-12-16 05:41:42.003709] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.822 [2024-12-16 05:41:42.027890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87966 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87966 /var/tmp/bperf.sock 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 87966 ']' 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.822 05:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.080 [2024-12-16 05:41:42.169569] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:02.080 [2024-12-16 05:41:42.169772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87966 ] 00:24:02.338 [2024-12-16 05:41:42.355735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.338 [2024-12-16 05:41:42.478926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.905 05:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.905 05:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:02.905 05:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:02.905 05:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:02.905 05:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:03.472 [2024-12-16 05:41:43.555736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:03.472 05:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:03.472 05:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.040 nvme0n1 00:24:04.040 05:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:04.040 05:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:04.040 Running I/O for 2 seconds... 
00:24:05.914 14605.00 IOPS, 57.05 MiB/s [2024-12-16T05:41:46.173Z] 14605.00 IOPS, 57.05 MiB/s 00:24:05.914 Latency(us) 00:24:05.914 [2024-12-16T05:41:46.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.914 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:05.914 nvme0n1 : 2.01 14631.11 57.15 0.00 0.00 8741.95 8281.37 21209.83 00:24:05.914 [2024-12-16T05:41:46.173Z] =================================================================================================================== 00:24:05.914 [2024-12-16T05:41:46.173Z] Total : 14631.11 57.15 0.00 0.00 8741.95 8281.37 21209.83 00:24:05.914 { 00:24:05.914 "results": [ 00:24:05.914 { 00:24:05.914 "job": "nvme0n1", 00:24:05.914 "core_mask": "0x2", 00:24:05.914 "workload": "randread", 00:24:05.914 "status": "finished", 00:24:05.914 "queue_depth": 128, 00:24:05.914 "io_size": 4096, 00:24:05.914 "runtime": 2.00518, 00:24:05.914 "iops": 14631.105436918382, 00:24:05.914 "mibps": 57.15275561296243, 00:24:05.914 "io_failed": 0, 00:24:05.914 "io_timeout": 0, 00:24:05.914 "avg_latency_us": 8741.945730451975, 00:24:05.914 "min_latency_us": 8281.367272727273, 00:24:05.914 "max_latency_us": 21209.832727272726 00:24:05.914 } 00:24:05.914 ], 00:24:05.914 "core_count": 1 00:24:05.914 } 00:24:05.914 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:05.914 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:05.914 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:05.914 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:05.914 | select(.opcode=="crc32c") 00:24:05.914 | "\(.module_name) \(.executed)"' 00:24:05.914 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87966 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 87966 ']' 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 87966 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.173 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87966 00:24:06.432 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.432 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:24:06.432 killing process with pid 87966 00:24:06.432 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87966' 00:24:06.432 Received shutdown signal, test time was about 2.000000 seconds 00:24:06.432 00:24:06.432 Latency(us) 00:24:06.432 [2024-12-16T05:41:46.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.432 [2024-12-16T05:41:46.691Z] =================================================================================================================== 00:24:06.432 [2024-12-16T05:41:46.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.432 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 87966 00:24:06.432 05:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 87966 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88033 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88033 /var/tmp/bperf.sock 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 88033 ']' 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.000 05:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:07.260 [2024-12-16 05:41:47.406273] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:07.260 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:07.260 Zero copy mechanism will not be used. 
00:24:07.260 [2024-12-16 05:41:47.406453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88033 ] 00:24:07.519 [2024-12-16 05:41:47.577361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.519 [2024-12-16 05:41:47.667654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.455 05:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.455 05:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:08.455 05:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:08.455 05:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:08.455 05:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:08.714 [2024-12-16 05:41:48.762978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:08.714 05:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:08.714 05:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:08.973 nvme0n1 00:24:08.973 05:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:08.973 05:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:09.232 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:09.232 Zero copy mechanism will not be used. 00:24:09.232 Running I/O for 2 seconds... 
00:24:11.107 7008.00 IOPS, 876.00 MiB/s [2024-12-16T05:41:51.366Z] 7072.00 IOPS, 884.00 MiB/s 00:24:11.107 Latency(us) 00:24:11.107 [2024-12-16T05:41:51.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.107 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:11.107 nvme0n1 : 2.00 7070.07 883.76 0.00 0.00 2259.34 2040.55 11260.28 00:24:11.107 [2024-12-16T05:41:51.366Z] =================================================================================================================== 00:24:11.107 [2024-12-16T05:41:51.366Z] Total : 7070.07 883.76 0.00 0.00 2259.34 2040.55 11260.28 00:24:11.107 { 00:24:11.107 "results": [ 00:24:11.107 { 00:24:11.107 "job": "nvme0n1", 00:24:11.107 "core_mask": "0x2", 00:24:11.107 "workload": "randread", 00:24:11.107 "status": "finished", 00:24:11.107 "queue_depth": 16, 00:24:11.107 "io_size": 131072, 00:24:11.107 "runtime": 2.002809, 00:24:11.107 "iops": 7070.070086563422, 00:24:11.107 "mibps": 883.7587608204277, 00:24:11.107 "io_failed": 0, 00:24:11.107 "io_timeout": 0, 00:24:11.107 "avg_latency_us": 2259.3447642526967, 00:24:11.107 "min_latency_us": 2040.5527272727272, 00:24:11.107 "max_latency_us": 11260.276363636363 00:24:11.107 } 00:24:11.107 ], 00:24:11.107 "core_count": 1 00:24:11.107 } 00:24:11.366 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:11.366 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:11.366 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:11.366 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:11.366 | select(.opcode=="crc32c") 00:24:11.366 | "\(.module_name) \(.executed)"' 00:24:11.366 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88033 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 88033 ']' 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 88033 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88033 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:24:11.625 killing process with pid 88033 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88033' 00:24:11.625 Received shutdown signal, test time was about 2.000000 seconds 00:24:11.625 00:24:11.625 Latency(us) 00:24:11.625 [2024-12-16T05:41:51.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.625 [2024-12-16T05:41:51.884Z] =================================================================================================================== 00:24:11.625 [2024-12-16T05:41:51.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 88033 00:24:11.625 05:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 88033 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88101 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88101 /var/tmp/bperf.sock 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 88101 ']' 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.602 05:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.602 [2024-12-16 05:41:52.609832] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:12.602 [2024-12-16 05:41:52.610004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88101 ] 00:24:12.602 [2024-12-16 05:41:52.788512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.861 [2024-12-16 05:41:52.870779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.428 05:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.428 05:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:13.428 05:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:13.428 05:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:13.428 05:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:13.687 [2024-12-16 05:41:53.879290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:13.945 05:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.945 05:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.204 nvme0n1 00:24:14.204 05:41:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:14.204 05:41:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.204 Running I/O for 2 seconds... 
00:24:16.147 15622.00 IOPS, 61.02 MiB/s [2024-12-16T05:41:56.665Z] 15621.50 IOPS, 61.02 MiB/s 00:24:16.406 Latency(us) 00:24:16.406 [2024-12-16T05:41:56.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.406 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:16.406 nvme0n1 : 2.01 15592.55 60.91 0.00 0.00 8201.83 4527.94 21924.77 00:24:16.406 [2024-12-16T05:41:56.665Z] =================================================================================================================== 00:24:16.406 [2024-12-16T05:41:56.665Z] Total : 15592.55 60.91 0.00 0.00 8201.83 4527.94 21924.77 00:24:16.406 { 00:24:16.406 "results": [ 00:24:16.406 { 00:24:16.406 "job": "nvme0n1", 00:24:16.406 "core_mask": "0x2", 00:24:16.406 "workload": "randwrite", 00:24:16.406 "status": "finished", 00:24:16.406 "queue_depth": 128, 00:24:16.406 "io_size": 4096, 00:24:16.406 "runtime": 2.011922, 00:24:16.406 "iops": 15592.552792802107, 00:24:16.406 "mibps": 60.90840934688323, 00:24:16.406 "io_failed": 0, 00:24:16.406 "io_timeout": 0, 00:24:16.406 "avg_latency_us": 8201.826668173559, 00:24:16.406 "min_latency_us": 4527.941818181818, 00:24:16.406 "max_latency_us": 21924.77090909091 00:24:16.406 } 00:24:16.406 ], 00:24:16.406 "core_count": 1 00:24:16.406 } 00:24:16.406 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:16.406 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:16.406 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:16.406 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:16.406 | select(.opcode=="crc32c") 00:24:16.406 | "\(.module_name) \(.executed)"' 00:24:16.406 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88101 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 88101 ']' 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 88101 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.664 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88101 00:24:16.664 killing process with pid 88101 00:24:16.664 Received shutdown signal, test time was about 2.000000 seconds 00:24:16.664 00:24:16.664 Latency(us) 00:24:16.664 [2024-12-16T05:41:56.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:16.665 [2024-12-16T05:41:56.924Z] =================================================================================================================== 00:24:16.665 [2024-12-16T05:41:56.924Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.665 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:16.665 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:16.665 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88101' 00:24:16.665 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 88101 00:24:16.665 05:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 88101 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=88176 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 88176 /var/tmp/bperf.sock 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 88176 ']' 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.601 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:17.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:17.602 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.602 05:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:17.602 [2024-12-16 05:41:57.700369] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:17.602 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:17.602 Zero copy mechanism will not be used. 
00:24:17.602 [2024-12-16 05:41:57.700621] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88176 ] 00:24:17.861 [2024-12-16 05:41:57.875009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.861 [2024-12-16 05:41:57.955491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.428 05:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.428 05:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:18.428 05:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:18.428 05:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:18.428 05:41:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:18.996 [2024-12-16 05:41:58.959029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:18.996 05:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.996 05:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.255 nvme0n1 00:24:19.255 05:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:19.255 05:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:19.255 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:19.255 Zero copy mechanism will not be used. 00:24:19.255 Running I/O for 2 seconds... 
00:24:21.568 5471.00 IOPS, 683.88 MiB/s [2024-12-16T05:42:01.827Z] 5413.00 IOPS, 676.62 MiB/s 00:24:21.568 Latency(us) 00:24:21.568 [2024-12-16T05:42:01.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.568 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:21.568 nvme0n1 : 2.00 5411.63 676.45 0.00 0.00 2949.34 1623.51 4468.36 00:24:21.568 [2024-12-16T05:42:01.827Z] =================================================================================================================== 00:24:21.568 [2024-12-16T05:42:01.827Z] Total : 5411.63 676.45 0.00 0.00 2949.34 1623.51 4468.36 00:24:21.568 { 00:24:21.568 "results": [ 00:24:21.568 { 00:24:21.568 "job": "nvme0n1", 00:24:21.568 "core_mask": "0x2", 00:24:21.568 "workload": "randwrite", 00:24:21.568 "status": "finished", 00:24:21.568 "queue_depth": 16, 00:24:21.568 "io_size": 131072, 00:24:21.568 "runtime": 2.004387, 00:24:21.568 "iops": 5411.629590493253, 00:24:21.568 "mibps": 676.4536988116566, 00:24:21.568 "io_failed": 0, 00:24:21.568 "io_timeout": 0, 00:24:21.568 "avg_latency_us": 2949.3362709421126, 00:24:21.568 "min_latency_us": 1623.5054545454545, 00:24:21.568 "max_latency_us": 4468.363636363636 00:24:21.568 } 00:24:21.568 ], 00:24:21.568 "core_count": 1 00:24:21.568 } 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:21.568 | select(.opcode=="crc32c") 00:24:21.568 | "\(.module_name) \(.executed)"' 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 88176 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 88176 ']' 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 88176 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88176 00:24:21.568 killing process with pid 88176 00:24:21.568 Received shutdown signal, test time was about 2.000000 seconds 00:24:21.568 00:24:21.568 Latency(us) 00:24:21.568 [2024-12-16T05:42:01.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:21.568 [2024-12-16T05:42:01.827Z] =================================================================================================================== 00:24:21.568 [2024-12-16T05:42:01.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88176' 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 88176 00:24:21.568 05:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 88176 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 87928 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 87928 ']' 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 87928 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87928 00:24:22.506 killing process with pid 87928 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87928' 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 87928 00:24:22.506 05:42:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 87928 00:24:23.442 00:24:23.442 real 0m22.968s 00:24:23.442 user 0m44.378s 00:24:23.442 sys 0m4.687s 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.442 ************************************ 00:24:23.442 END TEST nvmf_digest_clean 00:24:23.442 ************************************ 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.442 ************************************ 00:24:23.442 START TEST nvmf_digest_error 00:24:23.442 ************************************ 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:24:23.442 05:42:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=88274 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 88274 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88274 ']' 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.442 05:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:23.701 [2024-12-16 05:42:03.752065] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:23.701 [2024-12-16 05:42:03.752242] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.701 [2024-12-16 05:42:03.928498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.960 [2024-12-16 05:42:04.011293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.960 [2024-12-16 05:42:04.011363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.960 [2024-12-16 05:42:04.011396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.960 [2024-12-16 05:42:04.011418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.960 [2024-12-16 05:42:04.011431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:23.960 [2024-12-16 05:42:04.012514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.528 [2024-12-16 05:42:04.677278] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.528 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.788 [2024-12-16 05:42:04.828695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:24.788 null0 00:24:24.788 [2024-12-16 05:42:04.929702] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.788 [2024-12-16 05:42:04.953911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88306 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88306 /var/tmp/bperf.sock 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:24.788 05:42:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88306 ']' 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:24.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.788 05:42:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.046 [2024-12-16 05:42:05.067990] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:25.047 [2024-12-16 05:42:05.068398] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88306 ] 00:24:25.047 [2024-12-16 05:42:05.258019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.304 [2024-12-16 05:42:05.381198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.304 [2024-12-16 05:42:05.537677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:25.872 05:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.872 05:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:25.872 05:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:25.872 05:42:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:26.131 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:26.131 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.131 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:26.131 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.131 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.131 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.390 nvme0n1 00:24:26.390 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:26.390 05:42:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.390 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:26.390 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.390 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:26.390 05:42:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:26.649 Running I/O for 2 seconds... 00:24:26.649 [2024-12-16 05:42:06.716688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.716766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.716788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.734332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.734543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.734574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.751687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.751874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.751904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.769352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.769419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.769439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.787970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.788192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.788224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.805751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.805812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:17342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.805833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.823119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.823188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.823207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.840359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.840602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.840650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.857909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.857970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.857991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.875219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.875288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.875307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.649 [2024-12-16 05:42:06.893435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.649 [2024-12-16 05:42:06.893497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.649 [2024-12-16 05:42:06.893519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.909 [2024-12-16 05:42:06.912744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.909 [2024-12-16 05:42:06.912811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.909 [2024-12-16 05:42:06.912830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.909 [2024-12-16 05:42:06.930198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.909 [2024-12-16 
05:42:06.930398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.909 [2024-12-16 05:42:06.930422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.909 [2024-12-16 05:42:06.947579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.909 [2024-12-16 05:42:06.947664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.909 [2024-12-16 05:42:06.947685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.909 [2024-12-16 05:42:06.964897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.909 [2024-12-16 05:42:06.964964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.909 [2024-12-16 05:42:06.964983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.909 [2024-12-16 05:42:06.982230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.909 [2024-12-16 05:42:06.982430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.909 [2024-12-16 05:42:06.982454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.909 [2024-12-16 05:42:06.999776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.909 [2024-12-16 05:42:06.999955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.909 [2024-12-16 05:42:06.999998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.909 [2024-12-16 05:42:07.017319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.909 [2024-12-16 05:42:07.017384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.909 [2024-12-16 05:42:07.017404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.910 [2024-12-16 05:42:07.034650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.910 [2024-12-16 05:42:07.034715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.910 [2024-12-16 05:42:07.034733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.910 [2024-12-16 05:42:07.051911] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.910 [2024-12-16 05:42:07.051972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.910 [2024-12-16 05:42:07.052008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.910 [2024-12-16 05:42:07.069259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.910 [2024-12-16 05:42:07.069325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.910 [2024-12-16 05:42:07.069344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.910 [2024-12-16 05:42:07.086480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.910 [2024-12-16 05:42:07.086710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.910 [2024-12-16 05:42:07.086735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.910 [2024-12-16 05:42:07.104471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.910 [2024-12-16 05:42:07.104547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.910 [2024-12-16 05:42:07.104568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.910 [2024-12-16 05:42:07.121977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.910 [2024-12-16 05:42:07.122045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.910 [2024-12-16 05:42:07.122064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.910 [2024-12-16 05:42:07.139367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.910 [2024-12-16 05:42:07.139433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.910 [2024-12-16 05:42:07.139453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.910 [2024-12-16 05:42:07.156977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:26.910 [2024-12-16 05:42:07.157037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.910 [2024-12-16 05:42:07.157058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.175904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.175971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.176003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.193320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.193379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.193400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.210694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.210754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.210774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.228003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.228214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.228238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.245772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.245840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.245862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.263027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.263086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.263108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.284209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.284269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.284291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.303850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.303908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.303932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.321233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.321295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.321313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.338748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.338805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.338825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.356003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.356058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.356078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.373318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.373380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.373398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.390465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.390521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.390541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-12-16 05:42:07.407804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.169 [2024-12-16 05:42:07.407859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12698 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-12-16 05:42:07.407878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.170 [2024-12-16 05:42:07.425593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.170 [2024-12-16 05:42:07.425688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.170 [2024-12-16 05:42:07.425708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.428 [2024-12-16 05:42:07.443571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.443637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.443658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.461256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.461312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.461332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.478507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.478568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.478585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.495668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.495723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.495743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.512921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.512972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.512997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.530129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.530189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.530207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.547275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.547331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.547351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.564511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.564567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.564586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.582242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.582290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.582309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.602155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.602215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.602239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.622000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.622079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.622097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.640489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.640546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.640567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.658626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.658688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.658706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-12-16 05:42:07.676942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.429 [2024-12-16 05:42:07.677003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-12-16 05:42:07.677021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 14169.00 IOPS, 55.35 MiB/s [2024-12-16T05:42:07.947Z] [2024-12-16 05:42:07.696564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.696628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.696650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.714959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.715024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.715043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.733263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.733319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.733339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.751738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.751799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.751817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.770079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.770141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.770159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.788309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.788368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.788414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.806628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.806692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.806710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.825122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.825188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.825210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.851361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.851422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.851440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.868718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.868775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.868794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.886093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.886154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.886172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.903229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.903295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.903313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.920603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.920670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.920691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-12-16 05:42:07.937805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.688 [2024-12-16 05:42:07.937865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-12-16 05:42:07.937883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:07.956388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:07.956512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:07.956529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:07.973803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:07.973860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:07.973883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:07.991068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:07.991128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:07.991146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.008256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.008319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.008337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.025576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.025642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19393 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.025662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.042824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.042885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.042902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.060066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.060184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.060203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.077323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.077378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.077398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.094439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.094502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.094520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.111631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.111693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.111711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.128917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.128971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.128990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.146075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.146135] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.146153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.163411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.163474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.163491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.180787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.180842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.180861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-12-16 05:42:08.197970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:27.948 [2024-12-16 05:42:08.198047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-12-16 05:42:08.198064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.207 [2024-12-16 05:42:08.216812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.207 [2024-12-16 05:42:08.216876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.207 [2024-12-16 05:42:08.216893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.207 [2024-12-16 05:42:08.234136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.207 [2024-12-16 05:42:08.234191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.207 [2024-12-16 05:42:08.234211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.207 [2024-12-16 05:42:08.251390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.207 [2024-12-16 05:42:08.251451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.207 [2024-12-16 05:42:08.251468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.207 [2024-12-16 05:42:08.268695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x61500002b280) 00:24:28.207 [2024-12-16 05:42:08.268758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.207 [2024-12-16 05:42:08.268775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.207 [2024-12-16 05:42:08.286099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.207 [2024-12-16 05:42:08.286139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.207 [2024-12-16 05:42:08.286159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.207 [2024-12-16 05:42:08.307143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.207 [2024-12-16 05:42:08.307204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.207 [2024-12-16 05:42:08.307221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.207 [2024-12-16 05:42:08.326767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.207 [2024-12-16 05:42:08.326826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.207 [2024-12-16 05:42:08.326847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.207 [2024-12-16 05:42:08.344837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.207 [2024-12-16 05:42:08.344889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.207 [2024-12-16 05:42:08.344906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.207 [2024-12-16 05:42:08.364406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.208 [2024-12-16 05:42:08.364469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.208 [2024-12-16 05:42:08.364489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.208 [2024-12-16 05:42:08.384117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.208 [2024-12-16 05:42:08.384209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.208 [2024-12-16 05:42:08.384228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.208 [2024-12-16 
05:42:08.401501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.208 [2024-12-16 05:42:08.401557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.208 [2024-12-16 05:42:08.401574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.208 [2024-12-16 05:42:08.418721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.208 [2024-12-16 05:42:08.418776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.208 [2024-12-16 05:42:08.418793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.208 [2024-12-16 05:42:08.435895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.208 [2024-12-16 05:42:08.435951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.208 [2024-12-16 05:42:08.435967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.208 [2024-12-16 05:42:08.453255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.208 [2024-12-16 05:42:08.453312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.208 [2024-12-16 05:42:08.453328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.472321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.472383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.472416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.489771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.489828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.489845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.507010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.507065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.507081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.524306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.524365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.524383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.541817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.541873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.541890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.559159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.559214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.559231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.576731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.576788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.576805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.594063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.594118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.594134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.611208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.611263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.611279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.628568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.628633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.467 [2024-12-16 05:42:08.628649] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.467 [2024-12-16 05:42:08.645724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.467 [2024-12-16 05:42:08.645781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.468 [2024-12-16 05:42:08.645798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.468 [2024-12-16 05:42:08.663022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.468 [2024-12-16 05:42:08.663078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.468 [2024-12-16 05:42:08.663093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.468 [2024-12-16 05:42:08.680513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.468 [2024-12-16 05:42:08.680569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.468 [2024-12-16 05:42:08.680585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.468 14232.00 IOPS, 55.59 MiB/s [2024-12-16T05:42:08.727Z] [2024-12-16 05:42:08.697914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:28.468 [2024-12-16 05:42:08.697970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.468 [2024-12-16 05:42:08.698003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.468 00:24:28.468 Latency(us) 00:24:28.468 [2024-12-16T05:42:08.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.468 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:28.468 nvme0n1 : 2.01 14237.12 55.61 0.00 0.00 8982.66 7298.33 34078.72 00:24:28.468 [2024-12-16T05:42:08.727Z] =================================================================================================================== 00:24:28.468 [2024-12-16T05:42:08.727Z] Total : 14237.12 55.61 0.00 0.00 8982.66 7298.33 34078.72 00:24:28.468 { 00:24:28.468 "results": [ 00:24:28.468 { 00:24:28.468 "job": "nvme0n1", 00:24:28.468 "core_mask": "0x2", 00:24:28.468 "workload": "randread", 00:24:28.468 "status": "finished", 00:24:28.468 "queue_depth": 128, 00:24:28.468 "io_size": 4096, 00:24:28.468 "runtime": 2.008272, 00:24:28.468 "iops": 14237.11529115578, 00:24:28.468 "mibps": 55.613731606077266, 00:24:28.468 "io_failed": 0, 00:24:28.468 "io_timeout": 0, 00:24:28.468 "avg_latency_us": 8982.658025130997, 00:24:28.468 "min_latency_us": 7298.327272727272, 00:24:28.468 "max_latency_us": 34078.72 00:24:28.468 } 00:24:28.468 ], 00:24:28.468 "core_count": 1 00:24:28.468 } 00:24:28.468 05:42:08 
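As a quick cross-check of the summary just above: with the queue depth of 128 used for this job, Little's law predicts an average latency of roughly 128 / 14237 IOPS ≈ 9.0 ms, which agrees with the reported 8982.66 us average to within rounding. io_failed staying at 0 is consistent with the transient-transport-error completions being retried (the run sets --bdev-retry-count -1) rather than surfaced as I/O failures.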
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:28.468 05:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:28.727 05:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:28.727 05:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:28.727 | .driver_specific 00:24:28.727 | .nvme_error 00:24:28.727 | .status_code 00:24:28.727 | .command_transient_transport_error' 00:24:28.986 05:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 )) 00:24:28.986 05:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88306 00:24:28.986 05:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 88306 ']' 00:24:28.986 05:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88306 00:24:28.986 05:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:28.986 05:42:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.986 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88306 00:24:28.986 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:28.986 killing process with pid 88306 00:24:28.986 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:28.986 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88306' 00:24:28.986 Received shutdown signal, test time was about 2.000000 seconds 00:24:28.986 00:24:28.986 Latency(us) 00:24:28.986 [2024-12-16T05:42:09.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.986 [2024-12-16T05:42:09.245Z] =================================================================================================================== 00:24:28.986 [2024-12-16T05:42:09.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.986 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88306 00:24:28.986 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88306 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88373 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88373 /var/tmp/bperf.sock 00:24:29.923 05:42:09 
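The get_transient_errcount step traced above is just a query against bdevperf's RPC socket: bdev_get_iostat is asked for nvme0n1 (per-controller NVMe error counters were enabled with --nvme-error-stat) and jq pulls out the transient-transport-error counter, here 112. A minimal stand-alone sketch of the same check, assuming the socket path and bdev name from this run:

  # read per-bdev NVMe error counters from the running bdevperf instance
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the check only passes if at least one injected digest error was counted
  (( errcount > 0 ))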
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88373 ']' 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.923 05:42:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.923 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:29.923 Zero copy mechanism will not be used. 00:24:29.923 [2024-12-16 05:42:09.958330] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:29.923 [2024-12-16 05:42:09.958503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88373 ] 00:24:29.923 [2024-12-16 05:42:10.137130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.182 [2024-12-16 05:42:10.225410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.182 [2024-12-16 05:42:10.381684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:30.750 05:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.750 05:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:30.750 05:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:30.750 05:42:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:31.009 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:31.009 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.009 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:31.009 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.009 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:31.009 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:31.269 nvme0n1 00:24:31.269 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:31.269 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.269 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:31.269 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.269 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:31.269 05:42:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:31.269 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:31.269 Zero copy mechanism will not be used. 00:24:31.269 Running I/O for 2 seconds... 00:24:31.269 [2024-12-16 05:42:11.523543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.269 [2024-12-16 05:42:11.523644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.269 [2024-12-16 05:42:11.523666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.529509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.529571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.529610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.534804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.534863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.534884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.539659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.539722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.539741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.544862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.544926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.544945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.549734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.549791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.549812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.554542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.554616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.554650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.559367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.559429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.559447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.564592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.564682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.564702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.569381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.569438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.569458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.574466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.574523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.574543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.579222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.579284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.579302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.583935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.583996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.584014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.588750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.588805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.529 [2024-12-16 05:42:11.588828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.529 [2024-12-16 05:42:11.593468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.529 [2024-12-16 05:42:11.593524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.593545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.598183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.598245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.598264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.602860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.602922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.602940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.607664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.607719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.607740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.612300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.612366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.612386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.617073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.617135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.617154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.621749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.621806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.621827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.626388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.626445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.626466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.631332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.631397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.631415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.636283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.636353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.636374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.641059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.641114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.641134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.645682] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.645738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.645759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.650325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.650387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.650405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.654983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.655047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.655065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.659648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.659703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.659723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.664323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.664366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.664387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.668925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.668998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.669015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.673736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.673797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.673815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.678508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.678563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.678584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.683114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.683178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.683196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.687791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.687853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.687872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.692606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.692672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.692693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.697231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.697287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.697310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.701919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.701982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.702000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.706639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.706702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.706720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.711240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.711295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.711315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.530 [2024-12-16 05:42:11.715994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.530 [2024-12-16 05:42:11.716051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.530 [2024-12-16 05:42:11.716072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.720708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.720773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.720792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.725302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.725363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.725382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.730037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.730092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.730112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.734693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.734747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.734769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.739235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.739298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.739316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.743889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.743949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.743967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.748528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.748583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.748630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.753234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.753295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.753313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.757853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.757914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.757932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.762478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.762533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.762555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.767127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.767182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.767204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.771764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.771824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.771842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.776436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.776524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.776542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.531 [2024-12-16 05:42:11.781198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.531 [2024-12-16 05:42:11.781253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.531 [2024-12-16 05:42:11.781273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.786514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.786572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.786593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.791431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.791507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.791541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.796369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.796464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.796498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.801095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.801151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.801171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.805775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.805830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.805852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.810401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.810461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.810480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.815203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.815263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.815281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.819811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.819866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.819886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.824362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.824432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.824452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.804 [2024-12-16 05:42:11.829188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.804 [2024-12-16 05:42:11.829251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.804 [2024-12-16 05:42:11.829269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.833876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.833950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.833968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.838542] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.838598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.838629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.843091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.843152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.843170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.847612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.847673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.847692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.852212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.852270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.852291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.856940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.856981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.857002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.861559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.861631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.861650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.866113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.866174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.866192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.870802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.870856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.870879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.875448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.875503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.875523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.880114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.880176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.880195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.884943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.885007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.885025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.889511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.889568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.889588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.894101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.894167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.894185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.898694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.898755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.898773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.903331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.903386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.903406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.908000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.908055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.908075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.913113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.913174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.913192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.917701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.917765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.917784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.923384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.923445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.923469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.929260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.929352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.929408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.935379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.935441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.935464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.940997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.941063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.941082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.945888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.945949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.945968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.950541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.950596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.950631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.955141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.955196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.955216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.959846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.959908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.805 [2024-12-16 05:42:11.959926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.805 [2024-12-16 05:42:11.964603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.805 [2024-12-16 05:42:11.964667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:11.964687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:11.969146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:11.969201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:11.969221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:11.973852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:11.973915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:11.973933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:11.978589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:11.978661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:11.978680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:11.983137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:11.983192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:11.983214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:11.987889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:11.987945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:11.987965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:11.992634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:11.992704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:11.992723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:11.997205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:11.997269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:11.997286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.001861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.001917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.001940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.006552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.006618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.006641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.011076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.011139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.011157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.015659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.015719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.015736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.020242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.020285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.020306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.024862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.024922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.024940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.029563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.029635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.029654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.034105] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.034161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.034183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.038720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.038774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.038794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.043241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.043304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.043322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.806 [2024-12-16 05:42:12.049336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:31.806 [2024-12-16 05:42:12.049392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.806 [2024-12-16 05:42:12.049444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.055961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.056059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.056080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.061332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.061388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.061409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.066231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.066319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.066355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.071354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.071416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.071434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.076047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.076149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.076180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.080833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.080888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.080908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.085507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.085563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.085583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.090183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.090244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.090262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.094878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.094937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.094958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.099554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.099618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.099641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.104144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.104211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.104232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.075 [2024-12-16 05:42:12.108804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.075 [2024-12-16 05:42:12.108863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.075 [2024-12-16 05:42:12.108881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.113416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.113473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.113494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.118271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.118344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.118362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.123066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.123123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.123140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.127917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.127974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.127991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.132735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.132789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.132806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.137289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.137345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.137363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.141919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.141975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.142009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.146564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.146633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.146650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.151118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.151173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.151190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.155689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.155744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.155760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.160321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.160378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.160397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.164981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.165036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.165052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.169584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.169666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.169683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.174229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.174284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.174301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.178921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.178977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.178993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.183456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.183510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.183527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.188002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.188058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.188075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.192673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.192717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.192736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.197391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.197447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.197464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.201932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.202005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.202023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.206545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.206601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.206632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.211097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.211152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.211169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.215720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.215774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.215791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.220344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.220401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.220434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.225027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.225083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.225100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.229667] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.229722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.229740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.234228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.076 [2024-12-16 05:42:12.234284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.076 [2024-12-16 05:42:12.234300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.076 [2024-12-16 05:42:12.238847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.238903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.238919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.243410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.243467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.243484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.248029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.248084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.248127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.252745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.252799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.252817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.257506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.257562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.257579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.262158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.262213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.262231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.266815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.266872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.266889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.273520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.273581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.273629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.279955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.280042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.280062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.284996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.285052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.285070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.289750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.289806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.289823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.294342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.294396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.294413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.299091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.299147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.299164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.303747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.303803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.303819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.308385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.308456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.308489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.313074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.313130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.313147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.317659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.317714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.317733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.322244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.322299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.322315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.077 [2024-12-16 05:42:12.326870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.077 [2024-12-16 05:42:12.326925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:32.077 [2024-12-16 05:42:12.326943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.332187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.332231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.332250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.337121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.337180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.337198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.342107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.342163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.342180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.346798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.346853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.346870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.351460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.351515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.351532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.356232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.356291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.356310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.360977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.361031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.361049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.365561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.365641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.365659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.370143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.370197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.370214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.374780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.374835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.374851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.379969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.380038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.380056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.385130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.385185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.385203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.390547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.390647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.390668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.396272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.396319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.396339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.401856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.401918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.401968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.407205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.407261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.407278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.412357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.412404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.412424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.417463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.417519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.338 [2024-12-16 05:42:12.417536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.338 [2024-12-16 05:42:12.422519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.338 [2024-12-16 05:42:12.422575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.422592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.427375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.427430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.427447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.431993] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.432061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.432077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.436620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.436685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.436702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.441301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.441357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.441374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.445899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.445954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.445971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.450657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.450714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.450731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.455432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.455488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.455506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.460362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.460422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.460457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.465114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.465170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.465186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.469768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.469822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.469839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.474300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.474355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.474371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.478955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.479011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.479028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.483574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.483655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.483673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.488221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.488263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.488281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.492891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.492947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.492964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.497465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.497521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.497539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.502137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.502193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.502210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.506851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.506907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.506923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.511496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.511553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.511570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.516336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.516379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.516397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.339 6448.00 IOPS, 806.00 MiB/s [2024-12-16T05:42:12.598Z] [2024-12-16 05:42:12.522568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.522653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.522672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.527305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.527362] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.527379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.532234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.532276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.532294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.537105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.537161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.537179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.541888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.339 [2024-12-16 05:42:12.541945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.339 [2024-12-16 05:42:12.541961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.339 [2024-12-16 05:42:12.546486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 [2024-12-16 05:42:12.546541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.546557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.340 [2024-12-16 05:42:12.551131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 [2024-12-16 05:42:12.551186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.551203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.340 [2024-12-16 05:42:12.555684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 [2024-12-16 05:42:12.555738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.555756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.340 [2024-12-16 05:42:12.560260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 
[2024-12-16 05:42:12.560301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.560318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.340 [2024-12-16 05:42:12.564918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 [2024-12-16 05:42:12.564991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.565007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.340 [2024-12-16 05:42:12.570064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 [2024-12-16 05:42:12.570120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.570137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.340 [2024-12-16 05:42:12.575326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 [2024-12-16 05:42:12.575383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.575400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.340 [2024-12-16 05:42:12.581126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 [2024-12-16 05:42:12.581184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.581201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.340 [2024-12-16 05:42:12.586781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 [2024-12-16 05:42:12.586827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.586847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.340 [2024-12-16 05:42:12.592381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.340 [2024-12-16 05:42:12.592428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.340 [2024-12-16 05:42:12.592475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.600 [2024-12-16 05:42:12.598263] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.600 [2024-12-16 05:42:12.598319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.600 [2024-12-16 05:42:12.598336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.600 [2024-12-16 05:42:12.603683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.600 [2024-12-16 05:42:12.603739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.603757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.608741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.608795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.608813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.613729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.613786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.613803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.618679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.618736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.618754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.623617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.623707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.623727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.628982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.629038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.629055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.634038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.634093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.634111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.638812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.638869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.638887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.643597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.643665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.643683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.648300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.648357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.648374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.653199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.653256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.653273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.658024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.658081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.658098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.662750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.662806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.662824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.667478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.667534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.667552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.672261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.672304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.672322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.677304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.677361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.677379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.682171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.682227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.682244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.686884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.686941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.686959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.691655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.691711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.691728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.696332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.696374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.696408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.701323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.701379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.701396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.706106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.706163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.706190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.710905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.710962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.710979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.715776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.715832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.715849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.720679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.720733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.601 [2024-12-16 05:42:12.720751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.601 [2024-12-16 05:42:12.725428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.601 [2024-12-16 05:42:12.725485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.725503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.730444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.730501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.730519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.735176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.735233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.735250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.739866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.739922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.739940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.744699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.744754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.744772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.749354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.749410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.749427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.754402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.754457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.754475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.759271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.759327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.759344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.764003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.764059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.764076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.768851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.768907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.768924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.773825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.773868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.773886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.778618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.778673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.778689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.783317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.783373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.783390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.788059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.788149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.788168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.792980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.793020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.793039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.797855] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.797912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.797929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.802602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.802656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.802673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.807302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.807359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.807376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.812039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.812114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.812148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.817009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.817075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.817093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.821889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.821945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.821962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.826611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.826676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.826693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.831351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.831407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.831424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.836470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.836542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.836560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.841331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.841386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.841402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.602 [2024-12-16 05:42:12.846113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.602 [2024-12-16 05:42:12.846168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.602 [2024-12-16 05:42:12.846185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.603 [2024-12-16 05:42:12.850793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.603 [2024-12-16 05:42:12.850847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.603 [2024-12-16 05:42:12.850864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.603 [2024-12-16 05:42:12.855926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.603 [2024-12-16 05:42:12.856009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.603 [2024-12-16 05:42:12.856038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.861059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.861114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.861130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.866117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.866172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.866189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.870779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.870818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.870834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.875458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.875513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.875529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.880219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.880313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.880334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.885179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.885240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.885258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.889973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.890035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.890054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.894748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.894807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.894824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.899226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.899286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.899304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.903856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.903930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.903949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.908644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.908732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.908752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.863 [2024-12-16 05:42:12.913225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.863 [2024-12-16 05:42:12.913283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.863 [2024-12-16 05:42:12.913301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.917899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.917958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.917976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.922690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.922749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.922783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.927322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.927382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.927400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.932067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.932170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.932191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.936825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.936883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.936901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.941485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.941544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.941562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.946186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.946245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.946262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.950837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.950896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.950915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.955377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.955438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.955456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.960149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.960332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.960359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.965149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.965209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.965227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.969763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.969823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.969841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.974399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.974458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.974476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.979069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.979112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.979146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.983619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.983679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.983697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.988051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.988116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.988163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.992778] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.992836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.992854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:12.997369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:12.997429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:12.997446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.002047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:13.002106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:13.002124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.006674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:13.006732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:13.006749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.011276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:13.011335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:13.011354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.015911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:13.015972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:13.015991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.020726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:13.020784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:13.020801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.025340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:13.025400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:13.025418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.030146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:13.030339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:13.030362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.035040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:13.035100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:13.035119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.039584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.864 [2024-12-16 05:42:13.039689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.864 [2024-12-16 05:42:13.039708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.864 [2024-12-16 05:42:13.044248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.044310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.044329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.048831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.048889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.048907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.053598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.053657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.053675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.058154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.058214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.058232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.062764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.062823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.062841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.067246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.067305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.067324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.071838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.071899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.071918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.076647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.076718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.076736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.081348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.081407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.081425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.086075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.086134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.086151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.090717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.090775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.090793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.095208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.095267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.095285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.099768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.099827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.099845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.104685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.104754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.104773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.109364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.109422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.109440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.114103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.114300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.114324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:32.865 [2024-12-16 05:42:13.119651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.865 [2024-12-16 05:42:13.119762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.865 [2024-12-16 05:42:13.119784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.124729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.124787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.124805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.129888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.129946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.129964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.134564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.134652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.134670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.139255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.139314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.139332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.143832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.143894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.143912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.148648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.148735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.148754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.153299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.153359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.153376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.157921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.157979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.157998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.162536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.162596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.162645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.167175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.167366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.167390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.172152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.172199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.172219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.176858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.176918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.176936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.181629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.181698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.181716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.186215] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.186274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.186292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.190896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.190955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.190974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.195461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.195520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.195538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.200067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.200148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.200185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.204820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.204880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.204898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.209497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.209556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.209574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.214279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.214338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.214356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.218954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.218999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.219017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.223638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.223696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.223714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.228230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.126 [2024-12-16 05:42:13.228279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.126 [2024-12-16 05:42:13.228298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.126 [2024-12-16 05:42:13.232942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.233046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.233065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.237551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.237640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.237660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.242175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.242234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.242252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.246774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.246832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.246850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.251322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.251381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.251398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.255971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.256046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.256064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.260648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.260716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.260734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.265344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.265404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.265422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.270019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.270078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.270096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.274543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.274776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.274800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.279443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.279505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.279523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.284174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.284223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.284241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.288771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.288831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.288851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.293374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.293434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.293452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.298076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.298272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.298296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.302892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.302959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.302988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.307508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.307552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.307586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.312250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.312312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.312331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.316982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.317041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.317059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.321668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.321728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.321747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.326308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.326366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.326384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.330987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.331046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.331065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.335624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.335683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.335701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.340208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.340270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.340289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.344914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.344974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.345008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.349479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.349538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.349556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.354124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.354184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.127 [2024-12-16 05:42:13.354201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.127 [2024-12-16 05:42:13.358814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.127 [2024-12-16 05:42:13.358873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.128 [2024-12-16 05:42:13.358891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.128 [2024-12-16 05:42:13.363434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.128 [2024-12-16 05:42:13.363493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.128 [2024-12-16 05:42:13.363512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.128 [2024-12-16 05:42:13.368081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.128 [2024-12-16 05:42:13.368181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.128 [2024-12-16 05:42:13.368200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.128 [2024-12-16 05:42:13.372766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.128 [2024-12-16 05:42:13.372826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.128 [2024-12-16 05:42:13.372845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.128 [2024-12-16 05:42:13.377390] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.128 [2024-12-16 05:42:13.377449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.128 [2024-12-16 05:42:13.377467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.382758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.382834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.382853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.387619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.387692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.387711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.392658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.392748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.392766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.397472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.397533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.397551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.402826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.402888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.402908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.407941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.408003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.408038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.413355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.413417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.413435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.419038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.419099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.419118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.424885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.424950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.425019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.430195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.430387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.430412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.436081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.436191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.436213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.441834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.441914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.441934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.447457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.447520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.447539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.453035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.453257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.388 [2024-12-16 05:42:13.453282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.388 [2024-12-16 05:42:13.458675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.388 [2024-12-16 05:42:13.458779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.458813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.464365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.464642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.464671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.470213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.470276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.470296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.475748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.475820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.475838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.481111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.481173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.481208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.486772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.486848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.486867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.492520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.492583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.492644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.498028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.498247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.498287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.503782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.503841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.503859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.508514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.508573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.508591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.513270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.513462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.513485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:33.389 [2024-12-16 05:42:13.518094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.389 [2024-12-16 05:42:13.518300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.389 [2024-12-16 05:42:13.518429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:33.389 6409.00 IOPS, 801.12 MiB/s 00:24:33.389 Latency(us) 00:24:33.389 [2024-12-16T05:42:13.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.389 Job: nvme0n1 (Core Mask 0x2, workload: 
randread, depth: 16, IO size: 131072) 00:24:33.389 nvme0n1 : 2.00 6406.93 800.87 0.00 0.00 2493.40 2100.13 7179.17 00:24:33.389 [2024-12-16T05:42:13.648Z] =================================================================================================================== 00:24:33.389 [2024-12-16T05:42:13.648Z] Total : 6406.93 800.87 0.00 0.00 2493.40 2100.13 7179.17 00:24:33.389 { 00:24:33.389 "results": [ 00:24:33.389 { 00:24:33.389 "job": "nvme0n1", 00:24:33.389 "core_mask": "0x2", 00:24:33.389 "workload": "randread", 00:24:33.389 "status": "finished", 00:24:33.389 "queue_depth": 16, 00:24:33.389 "io_size": 131072, 00:24:33.389 "runtime": 2.003142, 00:24:33.389 "iops": 6406.934705577538, 00:24:33.389 "mibps": 800.8668381971922, 00:24:33.389 "io_failed": 0, 00:24:33.389 "io_timeout": 0, 00:24:33.389 "avg_latency_us": 2493.4019193335885, 00:24:33.389 "min_latency_us": 2100.130909090909, 00:24:33.389 "max_latency_us": 7179.170909090909 00:24:33.389 } 00:24:33.389 ], 00:24:33.389 "core_count": 1 00:24:33.389 } 00:24:33.389 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:33.389 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:33.389 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:33.389 | .driver_specific 00:24:33.389 | .nvme_error 00:24:33.389 | .status_code 00:24:33.389 | .command_transient_transport_error' 00:24:33.389 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 414 > 0 )) 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88373 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 88373 ']' 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88373 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88373 00:24:33.648 killing process with pid 88373 00:24:33.648 Received shutdown signal, test time was about 2.000000 seconds 00:24:33.648 00:24:33.648 Latency(us) 00:24:33.648 [2024-12-16T05:42:13.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.648 [2024-12-16T05:42:13.907Z] =================================================================================================================== 00:24:33.648 [2024-12-16T05:42:13.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88373' 00:24:33.648 05:42:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88373 00:24:33.648 05:42:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88373 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88440 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88440 /var/tmp/bperf.sock 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88440 ']' 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:34.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.586 05:42:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.586 [2024-12-16 05:42:14.817055] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:34.586 [2024-12-16 05:42:14.817507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88440 ] 00:24:34.845 [2024-12-16 05:42:14.997780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.845 [2024-12-16 05:42:15.078111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.104 [2024-12-16 05:42:15.233088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:35.670 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.670 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:35.670 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:35.670 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:35.929 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:35.929 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.929 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:35.929 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.929 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.929 05:42:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:36.188 nvme0n1 00:24:36.188 05:42:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:36.188 05:42:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.188 05:42:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:36.188 05:42:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.188 05:42:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:36.188 05:42:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:36.188 Running I/O for 2 seconds... 
00:24:36.189 [2024-12-16 05:42:16.390880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:24:36.189 [2024-12-16 05:42:16.392901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.189 [2024-12-16 05:42:16.393006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.189 [2024-12-16 05:42:16.408878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:24:36.189 [2024-12-16 05:42:16.410613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.189 [2024-12-16 05:42:16.410687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.189 [2024-12-16 05:42:16.425922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:24:36.189 [2024-12-16 05:42:16.427914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.189 [2024-12-16 05:42:16.427955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:36.189 [2024-12-16 05:42:16.443519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:24:36.189 [2024-12-16 05:42:16.445556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.189 [2024-12-16 05:42:16.445637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.462106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:24:36.448 [2024-12-16 05:42:16.463801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.463999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.480914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:24:36.448 [2024-12-16 05:42:16.482829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.482878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.501144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:24:36.448 [2024-12-16 05:42:16.502908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.502971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.520035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:24:36.448 [2024-12-16 05:42:16.521719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.521762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.537269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:24:36.448 [2024-12-16 05:42:16.538926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.538976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.554449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:24:36.448 [2024-12-16 05:42:16.556271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.556317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.571785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:24:36.448 [2024-12-16 05:42:16.573375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.573450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.589208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:24:36.448 [2024-12-16 05:42:16.590798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.590838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.606870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:24:36.448 [2024-12-16 05:42:16.608502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.608556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.623393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:24:36.448 [2024-12-16 05:42:16.625041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.625095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.639910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:24:36.448 [2024-12-16 05:42:16.641366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.641428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.656198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:24:36.448 [2024-12-16 05:42:16.657638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.657722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.680041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:24:36.448 [2024-12-16 05:42:16.682762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.682823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:36.448 [2024-12-16 05:42:16.696280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:24:36.448 [2024-12-16 05:42:16.698919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.448 [2024-12-16 05:42:16.698977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.714049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:24:36.708 [2024-12-16 05:42:16.716755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.708 [2024-12-16 05:42:16.716807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.730535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:24:36.708 [2024-12-16 05:42:16.733248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.708 [2024-12-16 05:42:16.733302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.746913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:24:36.708 [2024-12-16 05:42:16.749600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:36.708 [2024-12-16 05:42:16.749660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.763159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:24:36.708 [2024-12-16 05:42:16.765763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.708 [2024-12-16 05:42:16.765823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.779468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:24:36.708 [2024-12-16 05:42:16.782061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.708 [2024-12-16 05:42:16.782122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.795766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:24:36.708 [2024-12-16 05:42:16.798226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.708 [2024-12-16 05:42:16.798280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.812094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:24:36.708 [2024-12-16 05:42:16.814590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.708 [2024-12-16 05:42:16.814650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.828322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:24:36.708 [2024-12-16 05:42:16.830789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.708 [2024-12-16 05:42:16.830848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.844813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:24:36.708 [2024-12-16 05:42:16.847233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.708 [2024-12-16 05:42:16.847291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.861276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:24:36.708 [2024-12-16 05:42:16.863685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 
nsid:1 lba:16000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.708 [2024-12-16 05:42:16.863745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:36.708 [2024-12-16 05:42:16.877805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:24:36.708 [2024-12-16 05:42:16.880246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.709 [2024-12-16 05:42:16.880286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:36.709 [2024-12-16 05:42:16.894115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:24:36.709 [2024-12-16 05:42:16.896618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.709 [2024-12-16 05:42:16.896660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:36.709 [2024-12-16 05:42:16.910361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:24:36.709 [2024-12-16 05:42:16.912870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.709 [2024-12-16 05:42:16.912916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:36.709 [2024-12-16 05:42:16.926750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:24:36.709 [2024-12-16 05:42:16.929158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.709 [2024-12-16 05:42:16.929216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:36.709 [2024-12-16 05:42:16.942937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:24:36.709 [2024-12-16 05:42:16.945377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.709 [2024-12-16 05:42:16.945430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:36.709 [2024-12-16 05:42:16.959350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6020 00:24:36.709 [2024-12-16 05:42:16.961978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.709 [2024-12-16 05:42:16.962047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:36.968 [2024-12-16 05:42:16.977461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:24:36.968 [2024-12-16 05:42:16.979810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.968 [2024-12-16 05:42:16.979863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:16.993979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:24:36.969 [2024-12-16 05:42:16.996298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:16.996347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.010316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:24:36.969 [2024-12-16 05:42:17.012755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.012817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.026698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:24:36.969 [2024-12-16 05:42:17.028999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.029053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.043022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:24:36.969 [2024-12-16 05:42:17.045314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.045368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.059268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2d80 00:24:36.969 [2024-12-16 05:42:17.061490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.061550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.075464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:24:36.969 [2024-12-16 05:42:17.077713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.077759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.092142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 
00:24:36.969 [2024-12-16 05:42:17.094364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.094422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.108745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:24:36.969 [2024-12-16 05:42:17.110976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.111045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.125336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:24:36.969 [2024-12-16 05:42:17.127478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.127532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.141751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:24:36.969 [2024-12-16 05:42:17.143813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.143858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.157963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173efae0 00:24:36.969 [2024-12-16 05:42:17.160131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.160187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.174339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:24:36.969 [2024-12-16 05:42:17.176533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.176587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.190697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:24:36.969 [2024-12-16 05:42:17.192782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.192821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.206835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:24:36.969 [2024-12-16 05:42:17.208911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.208949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:36.969 [2024-12-16 05:42:17.223475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:24:36.969 [2024-12-16 05:42:17.225819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.969 [2024-12-16 05:42:17.225869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:37.228 [2024-12-16 05:42:17.241032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:24:37.228 [2024-12-16 05:42:17.243051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.228 [2024-12-16 05:42:17.243109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:37.228 [2024-12-16 05:42:17.257664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:24:37.228 [2024-12-16 05:42:17.259564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.259631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.273869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:24:37.229 [2024-12-16 05:42:17.275791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.275845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.290245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:24:37.229 [2024-12-16 05:42:17.292251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.292291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.306484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:24:37.229 [2024-12-16 05:42:17.308602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.308670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.322945] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:24:37.229 [2024-12-16 05:42:17.324917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.324976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.339228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:24:37.229 [2024-12-16 05:42:17.341250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.341304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.355881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:24:37.229 [2024-12-16 05:42:17.357731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.357786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:37.229 14929.00 IOPS, 58.32 MiB/s [2024-12-16T05:42:17.488Z] [2024-12-16 05:42:17.372223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:24:37.229 [2024-12-16 05:42:17.374106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.374159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.388532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:24:37.229 [2024-12-16 05:42:17.390418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.390480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.404846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:24:37.229 [2024-12-16 05:42:17.406584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.406652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.421282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:24:37.229 [2024-12-16 05:42:17.423107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.423160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.437687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:24:37.229 [2024-12-16 05:42:17.439391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.439444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.453913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:24:37.229 [2024-12-16 05:42:17.455580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.455672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:37.229 [2024-12-16 05:42:17.470150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:24:37.229 [2024-12-16 05:42:17.471854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.229 [2024-12-16 05:42:17.471897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.488 [2024-12-16 05:42:17.487294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:24:37.488 [2024-12-16 05:42:17.489322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.488 [2024-12-16 05:42:17.489395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:37.488 [2024-12-16 05:42:17.506493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:24:37.488 [2024-12-16 05:42:17.508626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.488 [2024-12-16 05:42:17.508711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:37.488 [2024-12-16 05:42:17.526474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:24:37.488 [2024-12-16 05:42:17.528504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.488 [2024-12-16 05:42:17.528562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:37.488 [2024-12-16 05:42:17.544911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:24:37.488 [2024-12-16 05:42:17.546526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.488 [2024-12-16 
05:42:17.546578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:37.488 [2024-12-16 05:42:17.561260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:24:37.488 [2024-12-16 05:42:17.562916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.488 [2024-12-16 05:42:17.562985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:37.488 [2024-12-16 05:42:17.577859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:24:37.488 [2024-12-16 05:42:17.579417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.488 [2024-12-16 05:42:17.579478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:37.488 [2024-12-16 05:42:17.594266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:24:37.488 [2024-12-16 05:42:17.595877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.488 [2024-12-16 05:42:17.595925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:37.489 [2024-12-16 05:42:17.612285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:24:37.489 [2024-12-16 05:42:17.613979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.489 [2024-12-16 05:42:17.614038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:37.489 [2024-12-16 05:42:17.632361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:24:37.489 [2024-12-16 05:42:17.634198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.489 [2024-12-16 05:42:17.634259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:37.489 [2024-12-16 05:42:17.650662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:24:37.489 [2024-12-16 05:42:17.652267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.489 [2024-12-16 05:42:17.652315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:37.489 [2024-12-16 05:42:17.668037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:24:37.489 [2024-12-16 05:42:17.669669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12805 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.489 [2024-12-16 05:42:17.669709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:37.489 [2024-12-16 05:42:17.685513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:24:37.489 [2024-12-16 05:42:17.687064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.489 [2024-12-16 05:42:17.687118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:37.489 [2024-12-16 05:42:17.702695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:24:37.489 [2024-12-16 05:42:17.704226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.489 [2024-12-16 05:42:17.704267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:37.489 [2024-12-16 05:42:17.720070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:24:37.489 [2024-12-16 05:42:17.721672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.489 [2024-12-16 05:42:17.721727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:37.489 [2024-12-16 05:42:17.744930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:24:37.748 [2024-12-16 05:42:17.748041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.748 [2024-12-16 05:42:17.748103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.748 [2024-12-16 05:42:17.763050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:24:37.748 [2024-12-16 05:42:17.765960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.748 [2024-12-16 05:42:17.766000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:37.748 [2024-12-16 05:42:17.780273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:24:37.748 [2024-12-16 05:42:17.783054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.748 [2024-12-16 05:42:17.783115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:37.748 [2024-12-16 05:42:17.797435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:24:37.748 [2024-12-16 05:42:17.800187] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.748 [2024-12-16 05:42:17.800232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:37.748 [2024-12-16 05:42:17.814703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:24:37.748 [2024-12-16 05:42:17.817461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.748 [2024-12-16 05:42:17.817520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:37.748 [2024-12-16 05:42:17.832220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:24:37.748 [2024-12-16 05:42:17.834934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.748 [2024-12-16 05:42:17.834974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:37.748 [2024-12-16 05:42:17.849580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:24:37.748 [2024-12-16 05:42:17.852231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.748 [2024-12-16 05:42:17.852271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:37.748 [2024-12-16 05:42:17.866907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:24:37.749 [2024-12-16 05:42:17.869639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.749 [2024-12-16 05:42:17.869701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:37.749 [2024-12-16 05:42:17.883933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:24:37.749 [2024-12-16 05:42:17.886429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.749 [2024-12-16 05:42:17.886489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:37.749 [2024-12-16 05:42:17.900290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:24:37.749 [2024-12-16 05:42:17.902903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.749 [2024-12-16 05:42:17.902961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:37.749 [2024-12-16 05:42:17.916761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:24:37.749 [2024-12-16 
05:42:17.919129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.749 [2024-12-16 05:42:17.919182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:37.749 [2024-12-16 05:42:17.933245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:24:37.749 [2024-12-16 05:42:17.935681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.749 [2024-12-16 05:42:17.935735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:37.749 [2024-12-16 05:42:17.949488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:24:37.749 [2024-12-16 05:42:17.951918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.749 [2024-12-16 05:42:17.951971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:37.749 [2024-12-16 05:42:17.965829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:24:37.749 [2024-12-16 05:42:17.968179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.749 [2024-12-16 05:42:17.968226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:37.749 [2024-12-16 05:42:17.982208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:24:37.749 [2024-12-16 05:42:17.984695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.749 [2024-12-16 05:42:17.984756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:37.749 [2024-12-16 05:42:17.998791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:24:37.749 [2024-12-16 05:42:18.001150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.749 [2024-12-16 05:42:18.001204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.016725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:24:38.008 [2024-12-16 05:42:18.018998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.019052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.033233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:24:38.008 [2024-12-16 05:42:18.035549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.035612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.049469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:24:38.008 [2024-12-16 05:42:18.051806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.051866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.065806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:24:38.008 [2024-12-16 05:42:18.068000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.068057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.082158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:24:38.008 [2024-12-16 05:42:18.084522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.084576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.098410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:24:38.008 [2024-12-16 05:42:18.100794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.100847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.114776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:24:38.008 [2024-12-16 05:42:18.117040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.117101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.131172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:24:38.008 [2024-12-16 05:42:18.133421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.133481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.147513] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:24:38.008 [2024-12-16 05:42:18.149775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.149832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.163889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:24:38.008 [2024-12-16 05:42:18.166087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.166141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.180357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:24:38.008 [2024-12-16 05:42:18.182543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.008 [2024-12-16 05:42:18.182596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.008 [2024-12-16 05:42:18.196675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:24:38.008 [2024-12-16 05:42:18.198739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.009 [2024-12-16 05:42:18.198801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.009 [2024-12-16 05:42:18.212945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:24:38.009 [2024-12-16 05:42:18.215066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.009 [2024-12-16 05:42:18.215124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.009 [2024-12-16 05:42:18.229343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:24:38.009 [2024-12-16 05:42:18.231541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.009 [2024-12-16 05:42:18.231597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.009 [2024-12-16 05:42:18.245801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:24:38.009 [2024-12-16 05:42:18.247803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.009 [2024-12-16 05:42:18.247864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 
p:0 m:0 dnr:0 00:24:38.009 [2024-12-16 05:42:18.262204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:24:38.009 [2024-12-16 05:42:18.264655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.009 [2024-12-16 05:42:18.264719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.268 [2024-12-16 05:42:18.279777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:24:38.268 [2024-12-16 05:42:18.281902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.268 [2024-12-16 05:42:18.281968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.268 [2024-12-16 05:42:18.296430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:24:38.268 [2024-12-16 05:42:18.298508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.268 [2024-12-16 05:42:18.298565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.268 [2024-12-16 05:42:18.312789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173efae0 00:24:38.268 [2024-12-16 05:42:18.314718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.268 [2024-12-16 05:42:18.314774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.268 [2024-12-16 05:42:18.329131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:24:38.268 [2024-12-16 05:42:18.331135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.268 [2024-12-16 05:42:18.331188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.268 [2024-12-16 05:42:18.345546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:24:38.268 [2024-12-16 05:42:18.347505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.268 [2024-12-16 05:42:18.347557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.268 [2024-12-16 05:42:18.361830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:24:38.268 [2024-12-16 05:42:18.363729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.268 [2024-12-16 05:42:18.363790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.268 14928.00 IOPS, 58.31 MiB/s 00:24:38.268 Latency(us) 00:24:38.268 [2024-12-16T05:42:18.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.268 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:38.268 nvme0n1 : 2.00 14960.64 58.44 0.00 0.00 8548.20 7536.64 34078.72 00:24:38.268 [2024-12-16T05:42:18.527Z] =================================================================================================================== 00:24:38.268 [2024-12-16T05:42:18.527Z] Total : 14960.64 58.44 0.00 0.00 8548.20 7536.64 34078.72 00:24:38.268 { 00:24:38.268 "results": [ 00:24:38.268 { 00:24:38.268 "job": "nvme0n1", 00:24:38.268 "core_mask": "0x2", 00:24:38.268 "workload": "randwrite", 00:24:38.268 "status": "finished", 00:24:38.268 "queue_depth": 128, 00:24:38.268 "io_size": 4096, 00:24:38.268 "runtime": 2.004193, 00:24:38.268 "iops": 14960.63502866241, 00:24:38.269 "mibps": 58.439980580712536, 00:24:38.269 "io_failed": 0, 00:24:38.269 "io_timeout": 0, 00:24:38.269 "avg_latency_us": 8548.19939846706, 00:24:38.269 "min_latency_us": 7536.64, 00:24:38.269 "max_latency_us": 34078.72 00:24:38.269 } 00:24:38.269 ], 00:24:38.269 "core_count": 1 00:24:38.269 } 00:24:38.269 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:38.269 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:38.269 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:38.269 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:38.269 | .driver_specific 00:24:38.269 | .nvme_error 00:24:38.269 | .status_code 00:24:38.269 | .command_transient_transport_error' 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 )) 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88440 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 88440 ']' 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88440 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88440 00:24:38.529 killing process with pid 88440 00:24:38.529 Received shutdown signal, test time was about 2.000000 seconds 00:24:38.529 00:24:38.529 Latency(us) 00:24:38.529 [2024-12-16T05:42:18.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.529 [2024-12-16T05:42:18.788Z] =================================================================================================================== 00:24:38.529 [2024-12-16T05:42:18.788Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.529 05:42:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88440' 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88440 00:24:38.529 05:42:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88440 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=88501 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 88501 /var/tmp/bperf.sock 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 88501 ']' 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:39.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.466 05:42:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:39.466 [2024-12-16 05:42:19.630521] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:39.466 [2024-12-16 05:42:19.630988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88501 ] 00:24:39.466 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:39.466 Zero copy mechanism will not be used. 
00:24:39.725 [2024-12-16 05:42:19.810464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.725 [2024-12-16 05:42:19.891287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.984 [2024-12-16 05:42:20.041610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:40.552 05:42:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:40.810 nvme0n1 00:24:40.810 05:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:40.810 05:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.810 05:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:40.810 05:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.810 05:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:40.810 05:42:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:41.070 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:41.070 Zero copy mechanism will not be used. 00:24:41.070 Running I/O for 2 seconds... 
00:24:41.070 [2024-12-16 05:42:21.175712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.176057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.176097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.182165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.182270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.182301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.187879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.187991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.188028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.193707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.193810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.193850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.199381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.199685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.199718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.205418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.205546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.205575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.211105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.211375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.211416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.217267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.217373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.217403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.223060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.223166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.223195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.229008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.229107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.229144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.234845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.234955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.234992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.240547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.240706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.240752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.246447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.246736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.246775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.252485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.252584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.252638] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.258221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.258491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.258521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.264186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.264316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.264347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.269972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.270224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.270262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.275916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.276014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.276050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.281906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.282050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.282079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.287613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.287725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.287762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.293303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.293562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.293615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.299321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.299425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.299455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.305159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.070 [2024-12-16 05:42:21.305393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.070 [2024-12-16 05:42:21.305424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.070 [2024-12-16 05:42:21.311100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.071 [2024-12-16 05:42:21.311198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.071 [2024-12-16 05:42:21.311235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.071 [2024-12-16 05:42:21.316974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.071 [2024-12-16 05:42:21.317081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.071 [2024-12-16 05:42:21.317110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.071 [2024-12-16 05:42:21.322858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.071 [2024-12-16 05:42:21.322990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.071 [2024-12-16 05:42:21.323021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.329377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.329474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.329510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.335525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.335671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.335709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.341320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.341425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.341455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.347440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.347556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.347591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.353753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.353855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.353894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.359866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.360005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.360035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.366381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.366490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.366537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.372896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.373049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.373103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.379250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 
05:42:21.379362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.379392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.385678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.385807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.385838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.391657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.391787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.391826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.397746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.397868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.397898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.403458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.403727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.403759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.409659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.409759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.409794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.415553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.415832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.415888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.331 [2024-12-16 05:42:21.421875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.331 [2024-12-16 05:42:21.421985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.331 [2024-12-16 05:42:21.422015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.427802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.427906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.427941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.434081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.434195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.434236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.440014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.440181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.440213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.446189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.446298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.446339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.452280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.452397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.452468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.458578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.458737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.458768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.464688] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.464798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.464828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.470739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.470861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.470901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.476729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.476830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.476867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.482735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.482863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.482894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.488824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.488928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.488965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.494635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.494735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.494772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.500606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.500754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.500785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.506449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.506730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.506761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.512882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.512984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.513020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.518797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.518907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.518936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.524714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.524824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.524853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.530607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.530707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.530745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.536778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.536880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.536919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.542637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.542760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.542790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.548648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.548802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.548857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.554565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.554819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.554857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.560721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.560827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.560856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.566712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.566821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.566850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.572531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.572676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.572714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.578412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.578674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 05:42:21.578705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.332 [2024-12-16 05:42:21.585094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.332 [2024-12-16 05:42:21.585203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.332 [2024-12-16 
05:42:21.585233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.592047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.592194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.592254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.599178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.599288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.599319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.606334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.606566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.606614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.613383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.613484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.613521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.620057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.620226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.620260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.626687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.626792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.626824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.633401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.633523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.633561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.639549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.639702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.639748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.645682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.645805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.645834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.651410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.651527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.651562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.657425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.657679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.657717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.663326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.663458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.663486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.669818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.669953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.670036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.677230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.677509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.677546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.683840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.683943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.683972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.689704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.689792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.689828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.695386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.695498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.695534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.593 [2024-12-16 05:42:21.701287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.593 [2024-12-16 05:42:21.701556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.593 [2024-12-16 05:42:21.701586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.707171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.707290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.707318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.712954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.713225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.713262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.719028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.719153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.719182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.724782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.724874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.724902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.730435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.730546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.730582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.736291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.736537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.736578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.742284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.742412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.742441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.748034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.748320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.748350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.754062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.754160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.754196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.759708] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.759815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.759844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.765435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.765553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.765582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.771226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.771502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.771542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.777100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.777198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.777235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.782805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.782922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.782951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.788512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.788613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.788664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.794263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.794359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.794397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.799996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.800139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.800168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.805661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.805766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.805795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.811323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.811561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.811598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.817268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.817369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.817406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.822987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.823255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.823285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.829050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.829179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.829213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.834865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.834977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.835023] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.840550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.840706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.840752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.594 [2024-12-16 05:42:21.846432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.594 [2024-12-16 05:42:21.846697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.594 [2024-12-16 05:42:21.846741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.855 [2024-12-16 05:42:21.853192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.855 [2024-12-16 05:42:21.853303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.855 [2024-12-16 05:42:21.853338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.855 [2024-12-16 05:42:21.859399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.855 [2024-12-16 05:42:21.859516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.855 [2024-12-16 05:42:21.859545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.855 [2024-12-16 05:42:21.865294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.855 [2024-12-16 05:42:21.865399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.855 [2024-12-16 05:42:21.865427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.855 [2024-12-16 05:42:21.871104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.855 [2024-12-16 05:42:21.871201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.855 [2024-12-16 05:42:21.871253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.855 [2024-12-16 05:42:21.876972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.855 [2024-12-16 05:42:21.877081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.855 [2024-12-16 
05:42:21.877117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.855 [2024-12-16 05:42:21.882756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.855 [2024-12-16 05:42:21.882870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.855 [2024-12-16 05:42:21.882898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.855 [2024-12-16 05:42:21.888428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.855 [2024-12-16 05:42:21.888567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.855 [2024-12-16 05:42:21.888611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.855 [2024-12-16 05:42:21.894236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.855 [2024-12-16 05:42:21.894488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.855 [2024-12-16 05:42:21.894526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.855 [2024-12-16 05:42:21.900286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.900381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.900410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.906009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.906274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.906304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.911924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.912022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.912059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.917657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.917757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.917795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.923353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.923458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.923488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.929283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.929516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.929550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.935210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.935307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.935343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.940971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.941228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.941259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.946912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.947017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.947045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.952741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.952858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.952898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.958380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.958478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.958514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.964269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.964525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.964556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.970260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.970359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.970395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.976042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.976309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.976349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.982220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.982345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.982374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.988061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.988338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.988368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.994119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.994207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.994243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:21.999703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:21.999830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:21.999859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.005406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:22.005522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:22.005551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.011123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:22.011376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:22.011414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.017102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:22.017199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:22.017235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.022788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:22.022910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:22.022938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.028425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:22.028585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:22.028614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.034295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:22.034545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:22.034582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.040295] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:22.040388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:22.040417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.046038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:22.046284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:22.046313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.051997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.856 [2024-12-16 05:42:22.052133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.856 [2024-12-16 05:42:22.052185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.856 [2024-12-16 05:42:22.057703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.057800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.057837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.857 [2024-12-16 05:42:22.063323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.063444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.063473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.857 [2024-12-16 05:42:22.069181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.069423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.069453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.857 [2024-12-16 05:42:22.075141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.075238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.075274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:41.857 [2024-12-16 05:42:22.080990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.081114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.081144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.857 [2024-12-16 05:42:22.086646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.086763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.086792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.857 [2024-12-16 05:42:22.092393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.092726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.092765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.857 [2024-12-16 05:42:22.098320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.098458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.098496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.857 [2024-12-16 05:42:22.104025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.104302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.104334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.857 [2024-12-16 05:42:22.110563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:41.857 [2024-12-16 05:42:22.110730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.857 [2024-12-16 05:42:22.110767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.116875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.117 [2024-12-16 05:42:22.116987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.117 [2024-12-16 05:42:22.117042] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.122962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.117 [2024-12-16 05:42:22.123079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.117 [2024-12-16 05:42:22.123109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.128836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.117 [2024-12-16 05:42:22.128959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.117 [2024-12-16 05:42:22.128988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.134545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.117 [2024-12-16 05:42:22.134824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.117 [2024-12-16 05:42:22.134865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.140603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.117 [2024-12-16 05:42:22.140751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.117 [2024-12-16 05:42:22.140780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.146363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.117 [2024-12-16 05:42:22.146622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.117 [2024-12-16 05:42:22.146665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.152415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.117 [2024-12-16 05:42:22.152594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.117 [2024-12-16 05:42:22.152631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.158125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.117 [2024-12-16 05:42:22.158395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.117 [2024-12-16 
05:42:22.158433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.164102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.117 [2024-12-16 05:42:22.164231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.117 [2024-12-16 05:42:22.164264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.117 [2024-12-16 05:42:22.169885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.170015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.170052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.118 5171.00 IOPS, 646.38 MiB/s [2024-12-16T05:42:22.377Z] [2024-12-16 05:42:22.176553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.176694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.176758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.182461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.182721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.182752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.188583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.188732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.188777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.194267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.194528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.194558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.200236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.200370] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.200399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.205957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.206208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.206238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.211888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.212002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.212044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.217649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.217739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.217769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.223279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.223389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.223419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.229053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.229320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.229351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.235086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.235184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.235213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.240732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 
05:42:22.240858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.240886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.246313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.246412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.246442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.252063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.252345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.252376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.258141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.258246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.258275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.263814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.263924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.263953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.269664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.269759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.269788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.275285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.275382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.275410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.281231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.281341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.281370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.286875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.286972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.287000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.292609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.292908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.292939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.298569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.298690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.298719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.304256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.304504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.304535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.310191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.310314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.310342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.315901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.118 [2024-12-16 05:42:22.316011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.118 [2024-12-16 05:42:22.316040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.118 [2024-12-16 05:42:22.321704] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.119 [2024-12-16 05:42:22.321807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.119 [2024-12-16 05:42:22.321835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.119 [2024-12-16 05:42:22.327369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.119 [2024-12-16 05:42:22.327480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.119 [2024-12-16 05:42:22.327508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.119 [2024-12-16 05:42:22.333221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.119 [2024-12-16 05:42:22.333335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.119 [2024-12-16 05:42:22.333365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.119 [2024-12-16 05:42:22.338983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.119 [2024-12-16 05:42:22.339097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.119 [2024-12-16 05:42:22.339126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.119 [2024-12-16 05:42:22.344784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.119 [2024-12-16 05:42:22.344905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.119 [2024-12-16 05:42:22.344935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.119 [2024-12-16 05:42:22.350456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.119 [2024-12-16 05:42:22.350576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.119 [2024-12-16 05:42:22.350607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.119 [2024-12-16 05:42:22.356267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.119 [2024-12-16 05:42:22.356513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.119 [2024-12-16 05:42:22.356543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:42.119 [2024-12-16 05:42:22.362105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.119 [2024-12-16 05:42:22.362204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.119 [2024-12-16 05:42:22.362233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.119 [2024-12-16 05:42:22.367844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.119 [2024-12-16 05:42:22.367943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.119 [2024-12-16 05:42:22.367971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.374071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.374194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.379 [2024-12-16 05:42:22.374225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.380406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.380695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.379 [2024-12-16 05:42:22.380741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.386439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.386538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.379 [2024-12-16 05:42:22.386567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.392266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.392564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.379 [2024-12-16 05:42:22.392595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.398222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.398353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.379 [2024-12-16 05:42:22.398382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.404099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.404354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.379 [2024-12-16 05:42:22.404384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.410064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.410170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.379 [2024-12-16 05:42:22.410198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.415660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.415748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.379 [2024-12-16 05:42:22.415777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.421288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.421399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.379 [2024-12-16 05:42:22.421426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.379 [2024-12-16 05:42:22.426983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.379 [2024-12-16 05:42:22.427219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.427249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.433107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.433217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.433247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.438763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.438874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 
05:42:22.438903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.444429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.444538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.444567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.450120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.450357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.450387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.456057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.456215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.456246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.461904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.462005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.462034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.467543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.467692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.467737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.473240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.473491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.473521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.479086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.479200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.479230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.484870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.484966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.484995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.490498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.490595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.490656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.496206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.496452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.496496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.502188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.502286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.502315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.507940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.508049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.508077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.513655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.513753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.513781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.519332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.519433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.519462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.525129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.525239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.525268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.530791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.530891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.530920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.536483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.536767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.536798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.542455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.542574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.542617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.548220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.548449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.548481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.554151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.554249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.554278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.559873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.559984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.560013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.565640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.565741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.565770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.571326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.571424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.571453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.577233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.577330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.577359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.583071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.380 [2024-12-16 05:42:22.583181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.380 [2024-12-16 05:42:22.583225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.380 [2024-12-16 05:42:22.588977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.381 [2024-12-16 05:42:22.589094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.381 [2024-12-16 05:42:22.589123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.381 [2024-12-16 05:42:22.594681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.381 [2024-12-16 05:42:22.594783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.381 [2024-12-16 05:42:22.594814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.381 [2024-12-16 05:42:22.600419] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.381 [2024-12-16 05:42:22.600749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.381 [2024-12-16 05:42:22.600779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.381 [2024-12-16 05:42:22.606397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.381 [2024-12-16 05:42:22.606497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.381 [2024-12-16 05:42:22.606526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.381 [2024-12-16 05:42:22.612787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.381 [2024-12-16 05:42:22.612891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.381 [2024-12-16 05:42:22.612922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.381 [2024-12-16 05:42:22.619156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.381 [2024-12-16 05:42:22.619288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.381 [2024-12-16 05:42:22.619318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.381 [2024-12-16 05:42:22.625824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.381 [2024-12-16 05:42:22.625915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.381 [2024-12-16 05:42:22.625963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.381 [2024-12-16 05:42:22.633093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.381 [2024-12-16 05:42:22.633222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.381 [2024-12-16 05:42:22.633251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.641 [2024-12-16 05:42:22.640089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.641 [2024-12-16 05:42:22.640335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.641 [2024-12-16 05:42:22.640369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:42.641 [2024-12-16 05:42:22.647171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.641 [2024-12-16 05:42:22.647414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.641 [2024-12-16 05:42:22.647445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.641 [2024-12-16 05:42:22.654333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.641 [2024-12-16 05:42:22.654450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.641 [2024-12-16 05:42:22.654481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.641 [2024-12-16 05:42:22.661305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.641 [2024-12-16 05:42:22.661422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.641 [2024-12-16 05:42:22.661452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.641 [2024-12-16 05:42:22.667712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.641 [2024-12-16 05:42:22.667818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.641 [2024-12-16 05:42:22.667848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.641 [2024-12-16 05:42:22.673814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.641 [2024-12-16 05:42:22.673928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.641 [2024-12-16 05:42:22.673974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.641 [2024-12-16 05:42:22.679856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.641 [2024-12-16 05:42:22.679977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.641 [2024-12-16 05:42:22.680007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.641 [2024-12-16 05:42:22.686000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.641 [2024-12-16 05:42:22.686113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.641 [2024-12-16 05:42:22.686142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.691915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.692041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.692070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.697986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.698099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.698129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.703776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.703888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.703917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.709650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.709775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.709804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.715429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.715699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.715730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.721763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.721863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.721893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.727706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.727792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 
05:42:22.727821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.733541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.733675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.733706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.739457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.739727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.739757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.745687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.745775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.745804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.751562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.751857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.751888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.757826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.757939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.757968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.763648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.763744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.763773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.769519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.769661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.769692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.775483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.775738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.775769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.781554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.781683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.781713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.787408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.787663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.787694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.793564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.793695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.793724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.799485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.799740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.799770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.805712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.642 [2024-12-16 05:42:22.805830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.642 [2024-12-16 05:42:22.805860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.642 [2024-12-16 05:42:22.811500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.811739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.811770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.818146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.818247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.818277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.824339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.824429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.824503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.830329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.830443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.830472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.836335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.836465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.836510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.842296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.842419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.842448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.848387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.848543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.848572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.854370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.854486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.854515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.860434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.860571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.860599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.866414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.866530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.866559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.872627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.872778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.872807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.878626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.878749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.878779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.884538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.884675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.884706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.890477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.890752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.890782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.643 [2024-12-16 05:42:22.897234] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.643 [2024-12-16 05:42:22.897371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.643 [2024-12-16 05:42:22.897401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.903 [2024-12-16 05:42:22.903657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.903 [2024-12-16 05:42:22.903802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.903833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.909913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.910016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.910044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.915928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.916012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.916042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.921907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.922038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.922066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.927958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.928041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.928069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.933674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.933795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.933825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.939302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.939401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.939429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.945089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.945327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.945357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.950984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.951095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.951123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.956887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.956989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.957017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.962560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.962688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.962744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.968336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.968597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.968628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.974269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.974380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.974424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.979971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.980248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.980286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.986011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.986110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.986138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.991659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.991770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.991799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:22.997337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:22.997448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:22.997476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.003176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.003413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:23.003442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.009166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.009290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:23.009320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.014810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.014936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 
05:42:23.014964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.020608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.020766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:23.020797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.026267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.026503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:23.026533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.032224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.032327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:23.032361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.037955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.038207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:23.038237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.043852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.043968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:23.043997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.049540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.049834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.904 [2024-12-16 05:42:23.049864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.904 [2024-12-16 05:42:23.055511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.904 [2024-12-16 05:42:23.055643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.055689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.061211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.061470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.061500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.067139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.067239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.067267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.072903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.073013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.073042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.078569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.078737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.078767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.084525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.084633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.084678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.090181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.090280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.090308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.095884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.096000] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.096030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.101638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.101738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.101767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.107254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.107352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.107381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.113168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.113420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.113451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.119104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.119221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.119250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.124854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.124963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.124992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.130542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.130729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.130760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.136266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 
05:42:23.136531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.136561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.142262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.142361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.142390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.147962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.148081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.148119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.153758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.153862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.153890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.905 [2024-12-16 05:42:23.159974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:42.905 [2024-12-16 05:42:23.160117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.905 [2024-12-16 05:42:23.160164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:43.164 [2024-12-16 05:42:23.166420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:43.164 [2024-12-16 05:42:23.166580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.164 [2024-12-16 05:42:23.166626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:43.164 [2024-12-16 05:42:23.172249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:24:43.164 [2024-12-16 05:42:23.172549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.165 [2024-12-16 05:42:23.172579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:43.165 5197.50 IOPS, 649.69 MiB/s 00:24:43.165 Latency(us) 00:24:43.165 [2024-12-16T05:42:23.424Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.165 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:43.165 nvme0n1 : 2.00 5196.92 649.61 0.00 0.00 3071.25 1980.97 7626.01 00:24:43.165 [2024-12-16T05:42:23.424Z] =================================================================================================================== 00:24:43.165 [2024-12-16T05:42:23.424Z] Total : 5196.92 649.61 0.00 0.00 3071.25 1980.97 7626.01 00:24:43.165 { 00:24:43.165 "results": [ 00:24:43.165 { 00:24:43.165 "job": "nvme0n1", 00:24:43.165 "core_mask": "0x2", 00:24:43.165 "workload": "randwrite", 00:24:43.165 "status": "finished", 00:24:43.165 "queue_depth": 16, 00:24:43.165 "io_size": 131072, 00:24:43.165 "runtime": 2.004265, 00:24:43.165 "iops": 5196.917573274991, 00:24:43.165 "mibps": 649.6146966593739, 00:24:43.165 "io_failed": 0, 00:24:43.165 "io_timeout": 0, 00:24:43.165 "avg_latency_us": 3071.2464069264065, 00:24:43.165 "min_latency_us": 1980.9745454545455, 00:24:43.165 "max_latency_us": 7626.007272727273 00:24:43.165 } 00:24:43.165 ], 00:24:43.165 "core_count": 1 00:24:43.165 } 00:24:43.165 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:43.165 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:43.165 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:43.165 | .driver_specific 00:24:43.165 | .nvme_error 00:24:43.165 | .status_code 00:24:43.165 | .command_transient_transport_error' 00:24:43.165 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 336 > 0 )) 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 88501 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 88501 ']' 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88501 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88501 00:24:43.424 killing process with pid 88501 00:24:43.424 Received shutdown signal, test time was about 2.000000 seconds 00:24:43.424 00:24:43.424 Latency(us) 00:24:43.424 [2024-12-16T05:42:23.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.424 [2024-12-16T05:42:23.683Z] =================================================================================================================== 00:24:43.424 [2024-12-16T05:42:23.683Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 88501' 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88501 00:24:43.424 05:42:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88501 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 88274 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 88274 ']' 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 88274 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88274 00:24:44.362 killing process with pid 88274 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88274' 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 88274 00:24:44.362 05:42:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 88274 00:24:44.932 00:24:44.932 real 0m21.518s 00:24:44.932 user 0m41.288s 00:24:44.932 sys 0m4.504s 00:24:44.932 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.932 ************************************ 00:24:44.932 END TEST nvmf_digest_error 00:24:44.932 ************************************ 00:24:44.932 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:44.932 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:44.932 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:44.932 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:44.932 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.191 rmmod nvme_tcp 00:24:45.191 rmmod nvme_fabrics 00:24:45.191 rmmod nvme_keyring 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 88274 ']' 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- 
# killprocess 88274 00:24:45.191 Process with pid 88274 is not found 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 88274 ']' 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 88274 00:24:45.191 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88274) - No such process 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 88274 is not found' 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:45.191 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:24:45.450 00:24:45.450 real 0m45.618s 00:24:45.450 user 1m25.961s 00:24:45.450 
sys 0m9.643s 00:24:45.450 ************************************ 00:24:45.450 END TEST nvmf_digest 00:24:45.450 ************************************ 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.450 ************************************ 00:24:45.450 START TEST nvmf_host_multipath 00:24:45.450 ************************************ 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:45.450 * Looking for test storage... 00:24:45.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:24:45.450 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:45.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.710 --rc genhtml_branch_coverage=1 00:24:45.710 --rc genhtml_function_coverage=1 00:24:45.710 --rc genhtml_legend=1 00:24:45.710 --rc geninfo_all_blocks=1 00:24:45.710 --rc geninfo_unexecuted_blocks=1 00:24:45.710 00:24:45.710 ' 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:45.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.710 --rc genhtml_branch_coverage=1 00:24:45.710 --rc genhtml_function_coverage=1 00:24:45.710 --rc genhtml_legend=1 00:24:45.710 --rc geninfo_all_blocks=1 00:24:45.710 --rc geninfo_unexecuted_blocks=1 00:24:45.710 00:24:45.710 ' 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:45.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.710 --rc genhtml_branch_coverage=1 00:24:45.710 --rc genhtml_function_coverage=1 00:24:45.710 --rc genhtml_legend=1 00:24:45.710 --rc geninfo_all_blocks=1 00:24:45.710 --rc geninfo_unexecuted_blocks=1 00:24:45.710 00:24:45.710 ' 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:45.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.710 --rc genhtml_branch_coverage=1 00:24:45.710 --rc genhtml_function_coverage=1 00:24:45.710 --rc genhtml_legend=1 00:24:45.710 --rc geninfo_all_blocks=1 00:24:45.710 --rc geninfo_unexecuted_blocks=1 00:24:45.710 00:24:45.710 ' 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.710 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.711 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:45.711 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:45.712 Cannot find device "nvmf_init_br" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:45.712 Cannot find device "nvmf_init_br2" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:45.712 Cannot find device "nvmf_tgt_br" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:45.712 Cannot find device "nvmf_tgt_br2" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:45.712 Cannot find device "nvmf_init_br" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:45.712 Cannot find device "nvmf_init_br2" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:45.712 Cannot find device "nvmf_tgt_br" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:45.712 Cannot find device "nvmf_tgt_br2" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:45.712 Cannot find device "nvmf_br" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:45.712 Cannot find device "nvmf_init_if" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:45.712 Cannot find device "nvmf_init_if2" 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:24:45.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:45.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:45.712 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:45.972 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:45.972 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:45.972 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:45.972 05:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:45.972 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:45.972 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:24:45.972 00:24:45.972 --- 10.0.0.3 ping statistics --- 00:24:45.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.972 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:45.972 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:45.972 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:24:45.972 00:24:45.972 --- 10.0.0.4 ping statistics --- 00:24:45.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.972 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:45.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:45.972 00:24:45.972 --- 10.0.0.1 ping statistics --- 00:24:45.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.972 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:45.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:45.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:24:45.972 00:24:45.972 --- 10.0.0.2 ping statistics --- 00:24:45.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.972 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=88836 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 88836 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 88836 ']' 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.972 05:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:46.231 [2024-12-16 05:42:26.332748] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
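[editor's note] The trace above is nvmf_veth_init wiring up the virtual test network: the initiator veth ends stay on the host, the target ends are moved into the nvmf_tgt_ns_spdk namespace, every bridge-side peer is enslaved to nvmf_br, iptables ACCEPT rules open TCP/4420 on the initiator interfaces, and a round of pings confirms 10.0.0.1-10.0.0.4 all reach each other; nvmf_tgt is then launched inside the namespace (the "Starting SPDK / DPDK EAL parameters" lines beginning just above and continuing below). A condensed, hedged re-creation of that topology for one initiator/target pair, using the interface names and addresses seen in the trace (the real helper also builds the second pair and deletes stale devices first):

    # sketch only -- mirrors the nvmf_veth_init commands traced above, one pair of paths
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end goes into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # both host-side peers join the bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target netns can reach the initiator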
00:24:46.231 [2024-12-16 05:42:26.332921] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.491 [2024-12-16 05:42:26.515735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:46.491 [2024-12-16 05:42:26.602047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.491 [2024-12-16 05:42:26.602101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.491 [2024-12-16 05:42:26.602134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.491 [2024-12-16 05:42:26.602187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.491 [2024-12-16 05:42:26.602201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.491 [2024-12-16 05:42:26.605660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.491 [2024-12-16 05:42:26.605678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.750 [2024-12-16 05:42:26.768139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:47.318 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.318 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:24:47.318 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.318 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.318 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:47.318 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.318 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=88836 00:24:47.318 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:47.577 [2024-12-16 05:42:27.609116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.577 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:47.836 Malloc0 00:24:47.836 05:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:48.095 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:48.354 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:48.613 [2024-12-16 05:42:28.733572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:48.613 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:48.872 [2024-12-16 05:42:28.957648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=88892 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 88892 /var/tmp/bdevperf.sock 00:24:48.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 88892 ']' 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.872 05:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:49.856 05:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.856 05:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:24:49.856 05:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:50.115 05:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:50.373 Nvme0n1 00:24:50.373 05:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:50.631 Nvme0n1 00:24:50.631 05:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:24:50.631 05:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:52.009 05:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:52.009 05:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:52.009 05:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:52.268 05:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:52.268 05:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88836 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:52.268 05:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88937 00:24:52.268 05:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:58.835 Attaching 4 probes... 00:24:58.835 @path[10.0.0.3, 4421]: 16319 00:24:58.835 @path[10.0.0.3, 4421]: 16560 00:24:58.835 @path[10.0.0.3, 4421]: 16528 00:24:58.835 @path[10.0.0.3, 4421]: 16476 00:24:58.835 @path[10.0.0.3, 4421]: 16585 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88937 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:58.835 05:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:24:59.094 05:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:24:59.094 05:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89055 00:24:59.094 05:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88836 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:59.094 05:42:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:05.659 Attaching 4 probes... 00:25:05.659 @path[10.0.0.3, 4420]: 15743 00:25:05.659 @path[10.0.0.3, 4420]: 16278 00:25:05.659 @path[10.0.0.3, 4420]: 16316 00:25:05.659 @path[10.0.0.3, 4420]: 16152 00:25:05.659 @path[10.0.0.3, 4420]: 16202 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89055 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:05.659 05:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:05.918 05:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:05.918 05:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88836 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:05.918 05:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89163 00:25:05.918 05:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:12.483 Attaching 4 probes... 00:25:12.483 @path[10.0.0.3, 4421]: 12551 00:25:12.483 @path[10.0.0.3, 4421]: 16313 00:25:12.483 @path[10.0.0.3, 4421]: 16253 00:25:12.483 @path[10.0.0.3, 4421]: 16363 00:25:12.483 @path[10.0.0.3, 4421]: 16507 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89163 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:12.483 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:12.742 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:12.742 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:12.742 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89276 00:25:12.742 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88836 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:12.742 05:42:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:19.309 05:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:19.309 05:42:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:19.309 Attaching 4 probes... 
00:25:19.309 00:25:19.309 00:25:19.309 00:25:19.309 00:25:19.309 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89276 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:19.309 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:19.568 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:19.568 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89394 00:25:19.568 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88836 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:19.568 05:42:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:26.135 05:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:26.135 05:43:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:26.135 Attaching 4 probes... 
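[editor's note] The empty "Attaching 4 probes..." block above is the inaccessible/inaccessible case: with both listeners ANA-inaccessible, bdevperf has no usable path, nvmf_path.bt records no @path counters, and confirm_io_on_port is called with an empty expected state and port so the empty result matches. The port check itself is a small text pipeline over the "@path[ip, port]: count" lines the bpftrace script prints; a self-contained illustration of that pipeline (the sample trace lines below are fabricated for the example):

    # illustration of the awk/cut/sed chain used by confirm_io_on_port; sample data is made up
    printf '@path[10.0.0.3, 4421]: 16319\n@path[10.0.0.3, 4421]: 16560\n' > /tmp/trace_example.txt
    port=$(awk '$1=="@path[10.0.0.3," {print $2}' /tmp/trace_example.txt | cut -d ']' -f1 | sed -n 1p)
    [[ "$port" == "4421" ]] && echo "I/O is flowing on port $port"   # $port stays empty when no path carried I/O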
00:25:26.135 @path[10.0.0.3, 4421]: 15811 00:25:26.135 @path[10.0.0.3, 4421]: 16019 00:25:26.135 @path[10.0.0.3, 4421]: 16260 00:25:26.135 @path[10.0.0.3, 4421]: 16161 00:25:26.135 @path[10.0.0.3, 4421]: 16157 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89394 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:26.135 [2024-12-16 05:43:06.256728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 00:25:26.135 [2024-12-16 05:43:06.256789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 00:25:26.135 [2024-12-16 05:43:06.256807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 00:25:26.135 05:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:25:27.072 05:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:25:27.072 05:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89512 00:25:27.072 05:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88836 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:27.072 05:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:33.668 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:33.669 Attaching 4 probes... 
00:25:33.669 @path[10.0.0.3, 4420]: 15342 00:25:33.669 @path[10.0.0.3, 4420]: 15655 00:25:33.669 @path[10.0.0.3, 4420]: 15694 00:25:33.669 @path[10.0.0.3, 4420]: 15678 00:25:33.669 @path[10.0.0.3, 4420]: 15568 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89512 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:33.669 [2024-12-16 05:43:13.819391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:33.669 05:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:33.928 05:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:25:40.496 05:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:25:40.496 05:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89687 00:25:40.496 05:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:40.496 05:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88836 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:47.081 Attaching 4 probes... 
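[editor's note] Each confirm_io_on_port cycle also derives the expected port from the target itself: nvmf_subsystem_get_listeners returns one JSON object per listener, and the jq filter seen in the trace keeps the listener whose reported ANA state matches the state the test just configured, then prints its trsvcid. A hedged sketch with a hand-written, trimmed response (only the fields the filter relies on are shown; the real RPC output carries more):

    # hypothetical, abbreviated get_listeners output -- illustrative only
    listeners='[{"address":{"trtype":"TCP","traddr":"10.0.0.3","trsvcid":"4420"},"ana_states":[{"ana_group":1,"ana_state":"non_optimized"}]},{"address":{"trtype":"TCP","traddr":"10.0.0.3","trsvcid":"4421"},"ana_states":[{"ana_group":1,"ana_state":"optimized"}]}]'
    echo "$listeners" | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'   # prints 4421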
00:25:47.081 @path[10.0.0.3, 4421]: 15615 00:25:47.081 @path[10.0.0.3, 4421]: 15890 00:25:47.081 @path[10.0.0.3, 4421]: 16065 00:25:47.081 @path[10.0.0.3, 4421]: 16149 00:25:47.081 @path[10.0.0.3, 4421]: 15924 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89687 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 88892 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 88892 ']' 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 88892 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88892 00:25:47.081 killing process with pid 88892 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88892' 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 88892 00:25:47.081 05:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 88892 00:25:47.081 { 00:25:47.081 "results": [ 00:25:47.081 { 00:25:47.081 "job": "Nvme0n1", 00:25:47.081 "core_mask": "0x4", 00:25:47.081 "workload": "verify", 00:25:47.081 "status": "terminated", 00:25:47.081 "verify_range": { 00:25:47.081 "start": 0, 00:25:47.081 "length": 16384 00:25:47.081 }, 00:25:47.081 "queue_depth": 128, 00:25:47.081 "io_size": 4096, 00:25:47.081 "runtime": 55.48001, 00:25:47.081 "iops": 6840.0312112416705, 00:25:47.081 "mibps": 26.718871918912775, 00:25:47.081 "io_failed": 0, 00:25:47.081 "io_timeout": 0, 00:25:47.081 "avg_latency_us": 18688.452876350366, 00:25:47.081 "min_latency_us": 651.6363636363636, 00:25:47.081 "max_latency_us": 7046430.72 00:25:47.081 } 00:25:47.081 ], 00:25:47.081 "core_count": 1 00:25:47.081 } 00:25:47.081 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 88892 00:25:47.082 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:47.082 [2024-12-16 05:42:29.062193] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 
24.03.0 initialization... 00:25:47.082 [2024-12-16 05:42:29.062346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88892 ] 00:25:47.082 [2024-12-16 05:42:29.227935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.082 [2024-12-16 05:42:29.314312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.082 [2024-12-16 05:42:29.470030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:47.082 Running I/O for 90 seconds... 00:25:47.082 7395.00 IOPS, 28.89 MiB/s [2024-12-16T05:43:27.341Z] 7854.50 IOPS, 30.68 MiB/s [2024-12-16T05:43:27.341Z] 8012.33 IOPS, 31.30 MiB/s [2024-12-16T05:43:27.341Z] 8079.25 IOPS, 31.56 MiB/s [2024-12-16T05:43:27.341Z] 8123.20 IOPS, 31.73 MiB/s [2024-12-16T05:43:27.341Z] 8144.83 IOPS, 31.82 MiB/s [2024-12-16T05:43:27.341Z] 8169.86 IOPS, 31.91 MiB/s [2024-12-16T05:43:27.341Z] 8171.62 IOPS, 31.92 MiB/s [2024-12-16T05:43:27.341Z] [2024-12-16 05:42:39.256126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.256963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.256992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.257013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.257062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.257112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.257162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.082 [2024-12-16 05:42:39.257213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.082 [2024-12-16 05:42:39.257282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.082 [2024-12-16 05:42:39.257334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.082 [2024-12-16 05:42:39.257386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.082 [2024-12-16 05:42:39.257453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.082 [2024-12-16 05:42:39.257514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.082 [2024-12-16 05:42:39.257570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.082 [2024-12-16 05:42:39.257638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.257722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.257778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:18360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.257831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.257899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.257950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.257979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.258001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.258031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.258052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.258081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.258103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.258132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.258154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.258184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.258205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.258246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.082 [2024-12-16 05:42:39.258269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:47.082 [2024-12-16 05:42:39.258299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.258322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.258372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.258423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.258474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.258525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.258576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.258644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.258716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.258768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.258820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.258871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:25:47.083 [2024-12-16 05:42:39.258911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.258934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.258964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.258986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.259642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.259698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.259752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.259804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.259858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.259937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.259968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.259992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.260037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.260060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.260090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.260112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.260199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.260225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.260257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.260281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.260312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.260345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.260384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.260409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.260441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.260464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.260496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.083 [2024-12-16 05:42:39.260564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:47.083 [2024-12-16 05:42:39.260593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.083 [2024-12-16 05:42:39.260615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.260644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.084 [2024-12-16 05:42:39.260679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.260713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.260735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.260765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.260786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.260816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.260837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.260866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.260887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.260916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.260938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.260967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.260988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.261039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.261101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.261152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.261203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.261254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.261305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.261355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.261405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.261966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.261987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.262037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.262087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.262136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.262186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 
dnr:0 00:25:47.084 [2024-12-16 05:42:39.262215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.262237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.084 [2024-12-16 05:42:39.262305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.262365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.262418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.262470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.262520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.262570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.262638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.262669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.084 [2024-12-16 05:42:39.262690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:47.084 [2024-12-16 05:42:39.264432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.085 [2024-12-16 05:42:39.264488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.264554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.264584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.264616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.264639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.264688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.264711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.264741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.264762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.264792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.264826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.264859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.264881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.264911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.264933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.264982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.265010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.265041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.265064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.265093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.265115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.265143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.265165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.265194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.265215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:39.265245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:39.265267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:47.085 8115.33 IOPS, 31.70 MiB/s [2024-12-16T05:43:27.344Z] 8103.40 IOPS, 31.65 MiB/s [2024-12-16T05:43:27.344Z] 8102.64 IOPS, 31.65 MiB/s [2024-12-16T05:43:27.344Z] 8108.67 IOPS, 31.67 MiB/s [2024-12-16T05:43:27.344Z] 8109.38 IOPS, 31.68 MiB/s [2024-12-16T05:43:27.344Z] 8107.86 IOPS, 31.67 MiB/s [2024-12-16T05:43:27.344Z] [2024-12-16 05:42:45.856991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.857070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.857172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.857223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.857291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.857340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.857387] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.857435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.857482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.085 [2024-12-16 05:42:45.857530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.085 [2024-12-16 05:42:45.857578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.085 [2024-12-16 05:42:45.857659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.085 [2024-12-16 05:42:45.857710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.085 [2024-12-16 05:42:45.857759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.085 [2024-12-16 05:42:45.857808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.085 [2024-12-16 05:42:45.857858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.857886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:47.085 [2024-12-16 05:42:45.857921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.858221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.858253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.858289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.858312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.858342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.858363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.858393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.858414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.858444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.858464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.858493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.858514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.858543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.085 [2024-12-16 05:42:45.858565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:47.085 [2024-12-16 05:42:45.858594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.858631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.858663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.858686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.858715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 
nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.858737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.858766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.858786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.858816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.858847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.858879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.858901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.858930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.858951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.858980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.859001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.859051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.859102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.859151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.859201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.859253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 
dnr:0 00:25:47.086 [2024-12-16 05:42:45.859821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.859949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.859980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.860017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.860061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.860082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.860110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.860131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.860205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.086 [2024-12-16 05:42:45.860229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.860270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.860294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.860325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.860347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.860377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.860399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.860430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.860453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.860543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.860569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:47.086 [2024-12-16 05:42:45.860600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.086 [2024-12-16 05:42:45.860621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.860662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.860687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.860717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.860737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.860766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.860787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.860816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.860837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.860866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.860886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.860915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.860936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.860965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.861010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.861062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:47.087 [2024-12-16 05:42:45.861506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.087 [2024-12-16 05:42:45.861876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.861925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.861954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.861975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 
nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:47.087 [2024-12-16 05:42:45.862699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.087 [2024-12-16 05:42:45.862721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.862750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.862770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.862800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.862821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.862851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.862873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.862910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.862932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.862961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.862982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.863033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:25:47.088 [2024-12-16 05:42:45.863062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.863083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.863133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.863201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:45.863933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.863964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.863986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.864032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.864054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.864084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.864105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.864135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.864200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.864233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.864257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.864288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.864335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.864367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.864390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:45.864421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.088 [2024-12-16 05:42:45.864443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:47.088 8042.00 IOPS, 31.41 MiB/s [2024-12-16T05:43:27.347Z] 7583.81 IOPS, 29.62 MiB/s [2024-12-16T05:43:27.347Z] 7622.18 IOPS, 29.77 MiB/s [2024-12-16T05:43:27.347Z] 7651.39 IOPS, 29.89 MiB/s [2024-12-16T05:43:27.347Z] 7677.32 IOPS, 29.99 MiB/s [2024-12-16T05:43:27.347Z] 7701.25 IOPS, 30.08 MiB/s [2024-12-16T05:43:27.347Z] 7725.95 IOPS, 30.18 MiB/s [2024-12-16T05:43:27.347Z] 7738.41 IOPS, 30.23 MiB/s [2024-12-16T05:43:27.347Z] [2024-12-16 05:42:52.965620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:52.965700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:52.965774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:52.965810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:52.965841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:52.965863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:52.965891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:52.965912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:52.965939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:52.965959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:52.965987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:52.966007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:52.966035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:52.966056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:52.966083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.088 [2024-12-16 05:42:52.966104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:47.088 [2024-12-16 05:42:52.966148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.966169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.966246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.966295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.966345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.966394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.966443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.966502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.966552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.966617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.966687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.966740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.966793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.966843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.966905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.966937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.966959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.967024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:25:47.089 [2024-12-16 05:42:52.967053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.967074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.967124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.967173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.967223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.967273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.967323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.089 [2024-12-16 05:42:52.967883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.967934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.967963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.967985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.968049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.968071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.968100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.968121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.968150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.968218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.968250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.968273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.968315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.968338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:47.089 [2024-12-16 05:42:52.968369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.089 [2024-12-16 05:42:52.968391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.090 [2024-12-16 05:42:52.968444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.090 [2024-12-16 05:42:52.968511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.090 [2024-12-16 05:42:52.968577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.090 [2024-12-16 05:42:52.968627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.090 [2024-12-16 05:42:52.968695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:47.090 [2024-12-16 05:42:52.968748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.090 [2024-12-16 05:42:52.968798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.090 [2024-12-16 05:42:52.968847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.968896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.968946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.968987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.969965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.969994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.970015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.970043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.970064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.970094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.090 [2024-12-16 05:42:52.970115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.970150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.090 [2024-12-16 05:42:52.970173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.970202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.090 [2024-12-16 05:42:52.970225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:47.090 [2024-12-16 05:42:52.970254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:25:47.091 [2024-12-16 05:42:52.970315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.970955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.970984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.971738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.971760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.972701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.091 [2024-12-16 05:42:52.972737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.972782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:47.091 [2024-12-16 05:42:52.972806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.972843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.972865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.972900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.972923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.972959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.972982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.973019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.973041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.973077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.973100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.973136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.973159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.973225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.973253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.973291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.091 [2024-12-16 05:42:52.973314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:47.091 [2024-12-16 05:42:52.973349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:42:52.973371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:42:52.973406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 
nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:42:52.973428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:42:52.973463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:42:52.973485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:42:52.973521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:42:52.973542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:42:52.973578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:42:52.973616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:42:52.973654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:42:52.973677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:42:52.973712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:42:52.973734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:47.092 7406.48 IOPS, 28.93 MiB/s [2024-12-16T05:43:27.351Z] 7097.88 IOPS, 27.73 MiB/s [2024-12-16T05:43:27.351Z] 6813.96 IOPS, 26.62 MiB/s [2024-12-16T05:43:27.351Z] 6551.88 IOPS, 25.59 MiB/s [2024-12-16T05:43:27.351Z] 6309.22 IOPS, 24.65 MiB/s [2024-12-16T05:43:27.351Z] 6083.89 IOPS, 23.77 MiB/s [2024-12-16T05:43:27.351Z] 5874.10 IOPS, 22.95 MiB/s [2024-12-16T05:43:27.351Z] 5935.27 IOPS, 23.18 MiB/s [2024-12-16T05:43:27.351Z] 6003.68 IOPS, 23.45 MiB/s [2024-12-16T05:43:27.351Z] 6069.44 IOPS, 23.71 MiB/s [2024-12-16T05:43:27.351Z] 6129.52 IOPS, 23.94 MiB/s [2024-12-16T05:43:27.351Z] 6186.76 IOPS, 24.17 MiB/s [2024-12-16T05:43:27.351Z] 6238.23 IOPS, 24.37 MiB/s [2024-12-16T05:43:27.351Z] [2024-12-16 05:43:06.256455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.256571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.256671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.256700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.256752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:33 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.256791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.256821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.256842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.256870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.256890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.256919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.256940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.256968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.256988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.257037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.257136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.257189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.257242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.257294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.257346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.257399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.257473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.092 [2024-12-16 05:43:06.257542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.257605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.257674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.257740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.257807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.257857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.257886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.257907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:25:47.092 [2024-12-16 05:43:06.257935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.257956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.258001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.258021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.258049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.258070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.258098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.258119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.258147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.258176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.258206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.258227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.258255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.258275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.258303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.258324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.258351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.092 [2024-12-16 05:43:06.258373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:47.092 [2024-12-16 05:43:06.258402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.258423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.258510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.258551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.258589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.258656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.258698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.258735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.258773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.258824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.258863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.258900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.258958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.258978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.258997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 
05:43:06.259335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.259353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.259390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.259427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.259464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.259501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.259538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.259575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.259612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.093 [2024-12-16 05:43:06.259937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.259974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.259994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.260012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.260031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.260049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.093 [2024-12-16 05:43:06.260069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.093 [2024-12-16 05:43:06.260087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:93 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.260123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.260160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.260243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.260282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62592 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.260978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.260996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 
05:43:06.261035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.261088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.261134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.261174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.261212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.261249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.261287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:47.094 [2024-12-16 05:43:06.261325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.261362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.261401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.261448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.261486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.261525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.261562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.094 [2024-12-16 05:43:06.261630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.094 [2024-12-16 05:43:06.261654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.095 [2024-12-16 05:43:06.261673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.261693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.095 [2024-12-16 05:43:06.261712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.261731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.095 [2024-12-16 05:43:06.261750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.261769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.095 [2024-12-16 05:43:06.261788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.261808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.095 [2024-12-16 05:43:06.261826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.261846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.095 [2024-12-16 05:43:06.261864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.261883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.095 [2024-12-16 05:43:06.261901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.261921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.095 [2024-12-16 05:43:06.261947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.261969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bf00 is same with the state(6) to be set 00:25:47.095 [2024-12-16 05:43:06.261993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62216 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62736 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62744 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62752 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 
[2024-12-16 05:43:06.262299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62760 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62768 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62776 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62784 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62792 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62800 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62808 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62816 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.262829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.095 [2024-12-16 05:43:06.262861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.095 [2024-12-16 05:43:06.262877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62824 len:8 PRP1 0x0 PRP2 0x0 00:25:47.095 [2024-12-16 05:43:06.262896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.264406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.095 [2024-12-16 05:43:06.264529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.095 [2024-12-16 05:43:06.264575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.095 [2024-12-16 05:43:06.264623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:25:47.095 [2024-12-16 05:43:06.265121] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.095 [2024-12-16 05:43:06.265160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b500 with addr=10.0.0.3, port=4421 00:25:47.095 [2024-12-16 05:43:06.265183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:25:47.095 [2024-12-16 05:43:06.265312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:25:47.095 [2024-12-16 05:43:06.265361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.095 [2024-12-16 05:43:06.265385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.095 [2024-12-16 05:43:06.265404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.095 [2024-12-16 05:43:06.265439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.095 [2024-12-16 05:43:06.265459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.095 6281.92 IOPS, 24.54 MiB/s [2024-12-16T05:43:27.354Z] 6316.24 IOPS, 24.67 MiB/s [2024-12-16T05:43:27.354Z] 6355.92 IOPS, 24.83 MiB/s [2024-12-16T05:43:27.354Z] 6393.56 IOPS, 24.97 MiB/s [2024-12-16T05:43:27.354Z] 6429.12 IOPS, 25.11 MiB/s [2024-12-16T05:43:27.354Z] 6464.51 IOPS, 25.25 MiB/s [2024-12-16T05:43:27.354Z] 6495.64 IOPS, 25.37 MiB/s [2024-12-16T05:43:27.354Z] 6522.16 IOPS, 25.48 MiB/s [2024-12-16T05:43:27.354Z] 6550.11 IOPS, 25.59 MiB/s [2024-12-16T05:43:27.354Z] 6579.31 IOPS, 25.70 MiB/s [2024-12-16T05:43:27.354Z] [2024-12-16 05:43:16.340401] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:25:47.095 6609.15 IOPS, 25.82 MiB/s [2024-12-16T05:43:27.354Z] 6639.60 IOPS, 25.94 MiB/s [2024-12-16T05:43:27.354Z] 6669.27 IOPS, 26.05 MiB/s [2024-12-16T05:43:27.354Z] 6697.41 IOPS, 26.16 MiB/s [2024-12-16T05:43:27.354Z] 6717.70 IOPS, 26.24 MiB/s [2024-12-16T05:43:27.354Z] 6742.69 IOPS, 26.34 MiB/s [2024-12-16T05:43:27.354Z] 6766.10 IOPS, 26.43 MiB/s [2024-12-16T05:43:27.354Z] 6789.23 IOPS, 26.52 MiB/s [2024-12-16T05:43:27.354Z] 6811.80 IOPS, 26.61 MiB/s [2024-12-16T05:43:27.354Z] 6834.56 IOPS, 26.70 MiB/s [2024-12-16T05:43:27.354Z] Received shutdown signal, test time was about 55.480807 seconds 00:25:47.095 00:25:47.095 Latency(us) 00:25:47.095 [2024-12-16T05:43:27.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.096 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:47.096 Verification LBA range: start 0x0 length 0x4000 00:25:47.096 Nvme0n1 : 55.48 6840.03 26.72 0.00 0.00 18688.45 651.64 7046430.72 00:25:47.096 [2024-12-16T05:43:27.355Z] =================================================================================================================== 00:25:47.096 [2024-12-16T05:43:27.355Z] Total : 6840.03 26.72 0.00 0.00 18688.45 651.64 7046430.72 00:25:47.096 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.354 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:25:47.354 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:47.354 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:25:47.354 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.354 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.613 rmmod nvme_tcp 00:25:47.613 rmmod nvme_fabrics 00:25:47.613 rmmod nvme_keyring 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 
00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 88836 ']' 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 88836 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 88836 ']' 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 88836 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88836 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:47.613 killing process with pid 88836 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88836' 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 88836 00:25:47.613 05:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 88836 00:25:48.549 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 
-- # ip link delete nvmf_br type bridge 00:25:48.550 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:48.808 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:48.808 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:48.808 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:48.808 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:48.808 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.808 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.809 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.809 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:25:48.809 00:25:48.809 real 1m3.355s 00:25:48.809 user 2m55.879s 00:25:48.809 sys 0m16.673s 00:25:48.809 ************************************ 00:25:48.809 END TEST nvmf_host_multipath 00:25:48.809 ************************************ 00:25:48.809 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.809 05:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:48.809 05:43:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:48.809 05:43:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:48.809 05:43:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.809 05:43:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.809 ************************************ 00:25:48.809 START TEST nvmf_timeout 00:25:48.809 ************************************ 00:25:48.809 05:43:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:48.809 * Looking for test storage... 
00:25:48.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:48.809 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:48.809 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:25:48.809 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:49.068 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:49.068 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:49.068 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.069 --rc genhtml_branch_coverage=1 00:25:49.069 --rc genhtml_function_coverage=1 00:25:49.069 --rc genhtml_legend=1 00:25:49.069 --rc geninfo_all_blocks=1 00:25:49.069 --rc geninfo_unexecuted_blocks=1 00:25:49.069 00:25:49.069 ' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.069 --rc genhtml_branch_coverage=1 00:25:49.069 --rc genhtml_function_coverage=1 00:25:49.069 --rc genhtml_legend=1 00:25:49.069 --rc geninfo_all_blocks=1 00:25:49.069 --rc geninfo_unexecuted_blocks=1 00:25:49.069 00:25:49.069 ' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.069 --rc genhtml_branch_coverage=1 00:25:49.069 --rc genhtml_function_coverage=1 00:25:49.069 --rc genhtml_legend=1 00:25:49.069 --rc geninfo_all_blocks=1 00:25:49.069 --rc geninfo_unexecuted_blocks=1 00:25:49.069 00:25:49.069 ' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.069 --rc genhtml_branch_coverage=1 00:25:49.069 --rc genhtml_function_coverage=1 00:25:49.069 --rc genhtml_legend=1 00:25:49.069 --rc geninfo_all_blocks=1 00:25:49.069 --rc geninfo_unexecuted_blocks=1 00:25:49.069 00:25:49.069 ' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.069 
05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:49.069 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:49.069 05:43:29 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:49.069 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:49.070 Cannot find device "nvmf_init_br" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:49.070 Cannot find device "nvmf_init_br2" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:25:49.070 Cannot find device "nvmf_tgt_br" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:49.070 Cannot find device "nvmf_tgt_br2" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:49.070 Cannot find device "nvmf_init_br" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:49.070 Cannot find device "nvmf_init_br2" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:49.070 Cannot find device "nvmf_tgt_br" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:49.070 Cannot find device "nvmf_tgt_br2" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:49.070 Cannot find device "nvmf_br" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:49.070 Cannot find device "nvmf_init_if" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:49.070 Cannot find device "nvmf_init_if2" 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:49.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:49.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:49.070 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:49.329 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:49.330 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
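(For orientation: the nvmf_veth_init calls traced above, together with the ping checks that follow, amount to the virtual topology sketched below. This is a condensed restatement of commands already visible in the trace, not an extra step of the run; the device names and 10.0.0.x/24 addresses are copied from the log. The run additionally tags each iptables rule with an 'SPDK_NVMF:' comment so the later cleanup can strip them via iptables-save | grep -v SPDK_NVMF | iptables-restore.)

# target namespace plus two veth pairs per side: initiator ends stay on the host, target ends move into the namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator addresses on the host, target addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bring everything up and join the *_br peer ends to a common bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
# admit NVMe/TCP traffic on port 4420 and let it cross the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT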
00:25:49.330 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:49.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:49.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:25:49.330 00:25:49.330 --- 10.0.0.3 ping statistics --- 00:25:49.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.330 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:49.330 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:49.330 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:49.330 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:25:49.330 00:25:49.330 --- 10.0.0.4 ping statistics --- 00:25:49.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.330 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:49.330 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:49.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:25:49.330 00:25:49.330 --- 10.0.0.1 ping statistics --- 00:25:49.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.330 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:25:49.330 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:49.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:25:49.588 00:25:49.588 --- 10.0.0.2 ping statistics --- 00:25:49.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.588 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=90059 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 90059 00:25:49.589 05:43:29 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 90059 ']' 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.589 05:43:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.589 [2024-12-16 05:43:29.744299] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:49.589 [2024-12-16 05:43:29.744472] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.848 [2024-12-16 05:43:29.928042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:49.848 [2024-12-16 05:43:30.016850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.848 [2024-12-16 05:43:30.016910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.848 [2024-12-16 05:43:30.016930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.848 [2024-12-16 05:43:30.016953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.848 [2024-12-16 05:43:30.016967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:49.848 [2024-12-16 05:43:30.018737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.848 [2024-12-16 05:43:30.018753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.107 [2024-12-16 05:43:30.178942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:50.672 05:43:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.672 05:43:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:50.672 05:43:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.672 05:43:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.672 05:43:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:50.672 05:43:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.672 05:43:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:50.672 05:43:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:50.931 [2024-12-16 05:43:31.018705] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.931 05:43:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:51.189 Malloc0 00:25:51.189 05:43:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:51.448 05:43:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:51.706 05:43:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:51.964 [2024-12-16 05:43:32.012010] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:51.964 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=90108 00:25:51.964 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:51.964 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 90108 /var/tmp/bdevperf.sock 00:25:51.964 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 90108 ']' 00:25:51.964 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.964 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.964 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:51.964 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.964 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:51.964 [2024-12-16 05:43:32.115303] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:51.964 [2024-12-16 05:43:32.115438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90108 ] 00:25:52.222 [2024-12-16 05:43:32.285775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.222 [2024-12-16 05:43:32.408158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.481 [2024-12-16 05:43:32.563736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:52.739 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.739 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:52.739 05:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:52.998 05:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:53.257 NVMe0n1 00:25:53.515 05:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=90132 00:25:53.515 05:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:53.515 05:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:25:53.515 Running I/O for 10 seconds... 
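(Pulling the timeout.sh setup traced above into one place: every path, address and flag below is copied from the trace; only the $rpc shorthand is introduced here for readability. The target exposes a 64 MiB, 512-byte-block malloc namespace on 10.0.0.3:4420, and bdevperf attaches to it with a 5-second controller-loss timeout and a 2-second reconnect delay before perform_tests starts the 10-second verify run whose I/O follows.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: TCP transport, malloc bdev, subsystem cnode1 listening on 10.0.0.3:4420
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# initiator side: bdevperf (core mask 0x4, queue depth 128, 4 KiB verify workload) driven over its own RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
# (the run waits for the bdevperf RPC socket here: waitforlisten in the trace)
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests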
00:25:54.451 05:43:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:54.713 7936.00 IOPS, 31.00 MiB/s [2024-12-16T05:43:34.972Z] [2024-12-16 05:43:34.741727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:25:54.713 [2024-12-16 05:43:34.741804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:25:54.713 [2024-12-16 05:43:34.741819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:25:54.713 [2024-12-16 05:43:34.741832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:25:54.713 [2024-12-16 05:43:34.741843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:25:54.713 [2024-12-16 05:43:34.741946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.741984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 
05:43:34.742491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.713 [2024-12-16 05:43:34.742615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.713 [2024-12-16 05:43:34.742896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.713 [2024-12-16 05:43:34.742909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.742925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.742937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.742954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.742966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.742982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.742995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.714 [2024-12-16 05:43:34.743391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72200 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.714 [2024-12-16 05:43:34.743419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.714 [2024-12-16 05:43:34.743448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.714 [2024-12-16 05:43:34.743475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.714 [2024-12-16 05:43:34.743510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.714 [2024-12-16 05:43:34.743540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.714 [2024-12-16 05:43:34.743569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.714 [2024-12-16 05:43:34.743597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:54.714 [2024-12-16 05:43:34.743725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.743979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.743995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.744007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.744023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.744035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.744051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.714 [2024-12-16 05:43:34.744064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.744079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.714 [2024-12-16 05:43:34.744091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.714 [2024-12-16 05:43:34.744107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.744119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.744147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.744174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.744202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.744275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.744305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.744335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.744963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:54.715 [2024-12-16 05:43:34.744975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 
05:43:34.744991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.715 [2024-12-16 05:43:34.745364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.715 [2024-12-16 05:43:34.745376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.716 [2024-12-16 05:43:34.745408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:25:54.716 [2024-12-16 05:43:34.745439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72440 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.745481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72992 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.745533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73000 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.745579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73008 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.745664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73016 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.745712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73024 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.745757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73032 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.745808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73040 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.745858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73048 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.745938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.745950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.745962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.745990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73056 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.746019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.746031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.746043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.746070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73064 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.746084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.746096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.746107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.746118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73072 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.746132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.746145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.746156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.746167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73080 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.746183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.746195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.746207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.746218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73088 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.746232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.746245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:54.716 [2024-12-16 05:43:34.746256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:54.716 [2024-12-16 05:43:34.746268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73096 len:8 PRP1 0x0 PRP2 0x0 00:25:54.716 [2024-12-16 05:43:34.746282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 
05:43:34.746625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.716 [2024-12-16 05:43:34.746652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.746686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.716 [2024-12-16 05:43:34.746702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.746718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.716 [2024-12-16 05:43:34.746730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.746748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.716 [2024-12-16 05:43:34.746761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.716 [2024-12-16 05:43:34.746774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:25:54.716 [2024-12-16 05:43:34.747022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:54.716 [2024-12-16 05:43:34.747068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:54.716 [2024-12-16 05:43:34.747208] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.716 [2024-12-16 05:43:34.747241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:25:54.716 [2024-12-16 05:43:34.747260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:25:54.716 [2024-12-16 05:43:34.747288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:54.716 [2024-12-16 05:43:34.747315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:54.716 [2024-12-16 05:43:34.760176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:54.716 [2024-12-16 05:43:34.760245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:54.716 [2024-12-16 05:43:34.760285] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:25:54.716 [2024-12-16 05:43:34.760303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:54.716 05:43:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:25:56.591 4505.00 IOPS, 17.60 MiB/s [2024-12-16T05:43:36.850Z] 3003.33 IOPS, 11.73 MiB/s [2024-12-16T05:43:36.850Z] [2024-12-16 05:43:36.760475] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.591 [2024-12-16 05:43:36.760563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:25:56.591 [2024-12-16 05:43:36.760587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:25:56.591 [2024-12-16 05:43:36.760664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:56.591 [2024-12-16 05:43:36.760698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:56.591 [2024-12-16 05:43:36.760713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:56.591 [2024-12-16 05:43:36.760729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:56.591 [2024-12-16 05:43:36.760745] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:25:56.591 [2024-12-16 05:43:36.760761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:56.591 05:43:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:25:56.591 05:43:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:56.591 05:43:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:56.851 05:43:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:25:56.851 05:43:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:25:56.851 05:43:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:56.851 05:43:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:57.109 05:43:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:25:57.109 05:43:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:25:58.744 2252.50 IOPS, 8.80 MiB/s [2024-12-16T05:43:39.003Z] 1802.00 IOPS, 7.04 MiB/s [2024-12-16T05:43:39.003Z] [2024-12-16 05:43:38.760920] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.744 [2024-12-16 05:43:38.760990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:25:58.744 [2024-12-16 05:43:38.761014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:25:58.744 [2024-12-16 05:43:38.761048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:58.744 [2024-12-16 05:43:38.761077] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:58.745 [2024-12-16 05:43:38.761091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:58.745 [2024-12-16 05:43:38.761109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:58.745 [2024-12-16 05:43:38.761124] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:25:58.745 [2024-12-16 05:43:38.761137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:00.659 1501.67 IOPS, 5.87 MiB/s [2024-12-16T05:43:40.918Z] 1287.14 IOPS, 5.03 MiB/s [2024-12-16T05:43:40.918Z] [2024-12-16 05:43:40.761196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:00.659 [2024-12-16 05:43:40.761260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:00.659 [2024-12-16 05:43:40.761275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:00.659 [2024-12-16 05:43:40.761288] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:26:00.659 [2024-12-16 05:43:40.761303] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:01.595 1126.25 IOPS, 4.40 MiB/s 00:26:01.595 Latency(us) 00:26:01.595 [2024-12-16T05:43:41.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.595 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:01.595 Verification LBA range: start 0x0 length 0x4000 00:26:01.595 NVMe0n1 : 8.13 1108.13 4.33 15.74 0.00 113706.97 4021.53 7015926.69 00:26:01.595 [2024-12-16T05:43:41.854Z] =================================================================================================================== 00:26:01.595 [2024-12-16T05:43:41.854Z] Total : 1108.13 4.33 15.74 0.00 113706.97 4021.53 7015926.69 00:26:01.595 { 00:26:01.595 "results": [ 00:26:01.595 { 00:26:01.595 "job": "NVMe0n1", 00:26:01.595 "core_mask": "0x4", 00:26:01.595 "workload": "verify", 00:26:01.595 "status": "finished", 00:26:01.595 "verify_range": { 00:26:01.595 "start": 0, 00:26:01.595 "length": 16384 00:26:01.595 }, 00:26:01.595 "queue_depth": 128, 00:26:01.595 "io_size": 4096, 00:26:01.595 "runtime": 8.130828, 00:26:01.595 "iops": 1108.1282250712966, 00:26:01.595 "mibps": 4.328625879184752, 00:26:01.595 "io_failed": 128, 00:26:01.595 "io_timeout": 0, 00:26:01.595 "avg_latency_us": 113706.96973676357, 00:26:01.595 "min_latency_us": 4021.5272727272727, 00:26:01.595 "max_latency_us": 7015926.69090909 00:26:01.595 } 00:26:01.595 ], 00:26:01.595 "core_count": 1 00:26:01.595 } 00:26:02.163 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:02.163 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:02.163 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:02.421 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:02.421 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # 
get_bdev 00:26:02.421 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:02.421 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 90132 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 90108 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 90108 ']' 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 90108 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90108 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:02.680 killing process with pid 90108 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90108' 00:26:02.680 Received shutdown signal, test time was about 9.165705 seconds 00:26:02.680 00:26:02.680 Latency(us) 00:26:02.680 [2024-12-16T05:43:42.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.680 [2024-12-16T05:43:42.939Z] =================================================================================================================== 00:26:02.680 [2024-12-16T05:43:42.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.680 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 90108 00:26:02.681 05:43:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 90108 00:26:03.616 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:03.875 [2024-12-16 05:43:43.893831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:03.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:03.875 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=90256 00:26:03.875 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:03.875 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 90256 /var/tmp/bdevperf.sock 00:26:03.875 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 90256 ']' 00:26:03.875 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:03.875 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.875 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:03.875 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.875 05:43:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:03.875 [2024-12-16 05:43:44.022262] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:03.875 [2024-12-16 05:43:44.022453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90256 ] 00:26:04.134 [2024-12-16 05:43:44.201173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.134 [2024-12-16 05:43:44.288296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.393 [2024-12-16 05:43:44.443102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:04.961 05:43:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.961 05:43:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:04.962 05:43:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:04.962 05:43:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:05.219 NVMe0n1 00:26:05.477 05:43:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=90278 00:26:05.477 05:43:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:05.477 05:43:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:05.477 Running I/O for 10 seconds... 
00:26:06.413 05:43:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:06.675 6550.00 IOPS, 25.59 MiB/s [2024-12-16T05:43:46.934Z] [2024-12-16 05:43:46.758362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set
[... the same tcp.c:1790 recv-state error for tqpair=0x618000003880 repeats continuously, timestamps 2024-12-16 05:43:46.758456 through 05:43:46.759895 ...]
00:26:06.676 [2024-12-16 05:43:46.759983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.676 [2024-12-16 05:43:46.760028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for the remaining queued I/O: READ commands lba 58672-59544 and WRITE commands lba 59568-59680 (cids vary), every one completed as ABORTED - SQ DELETION (00/08) ...]
00:26:06.680 [2024-12-16 05:43:46.764260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.680 [2024-12-16 05:43:46.764278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.680 [2024-12-16 05:43:46.764292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:06.680 [2024-12-16 05:43:46.764315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:06.680 [2024-12-16 05:43:46.764331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:06.680 [2024-12-16 05:43:46.764348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:0 nsid:1 lba:59560 len:8 PRP1 0x0 PRP2 0x0 00:26:06.680 [2024-12-16 05:43:46.764363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.680 [2024-12-16 05:43:46.764741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.680 [2024-12-16 05:43:46.764770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.680 [2024-12-16 05:43:46.764787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.680 [2024-12-16 05:43:46.764802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.680 [2024-12-16 05:43:46.764815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.680 [2024-12-16 05:43:46.764830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.680 [2024-12-16 05:43:46.764844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.680 [2024-12-16 05:43:46.764858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.680 [2024-12-16 05:43:46.764870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:06.680 [2024-12-16 05:43:46.765141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.680 [2024-12-16 05:43:46.765186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:06.680 [2024-12-16 05:43:46.765330] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.680 [2024-12-16 05:43:46.765369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:06.680 [2024-12-16 05:43:46.765388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:06.680 [2024-12-16 05:43:46.765419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:06.680 [2024-12-16 05:43:46.765444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.680 [2024-12-16 05:43:46.765462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.680 [2024-12-16 05:43:46.765477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.680 [2024-12-16 05:43:46.765496] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
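The reconnect attempts above fail with connect() errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.3:4420 once host/timeout.sh@87 removes the listener. One way to confirm that from the target side while the host keeps retrying is sketched below; it assumes the target's default RPC socket, and nvmf_subsystem_get_listeners is the rpc.py subcommand name in current SPDK trees, so treat the exact name as an assumption for older versions.

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
# Expected while the listener is removed: an empty list ([]); once
# host/timeout.sh@91 re-adds it, the 10.0.0.3:4420 entry reappears.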
00:26:06.680 [2024-12-16 05:43:46.765512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.680 05:43:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:26:07.620 3666.50 IOPS, 14.32 MiB/s [2024-12-16T05:43:47.879Z] [2024-12-16 05:43:47.778618] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.620 [2024-12-16 05:43:47.778693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:07.620 [2024-12-16 05:43:47.778714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:07.620 [2024-12-16 05:43:47.778747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:07.620 [2024-12-16 05:43:47.778773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.620 [2024-12-16 05:43:47.778794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.620 [2024-12-16 05:43:47.778808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.620 [2024-12-16 05:43:47.778826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:07.620 [2024-12-16 05:43:47.778840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.620 05:43:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:07.879 [2024-12-16 05:43:48.046656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:07.879 05:43:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 90278 00:26:08.705 2444.33 IOPS, 9.55 MiB/s [2024-12-16T05:43:48.964Z] [2024-12-16 05:43:48.799288] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
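For reference, the listener toggle that drives the reconnect storm above comes down to two RPC calls against the running target. A minimal sketch in shell, using the same NQN, address, and port that appear in the trace; the sleep between the calls stands in for the test's own delay and its length is not asserted here:

  # Drop the TCP listener: outstanding I/O is aborted (SQ DELETION) and reconnect attempts start failing
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Give the initiator time to cycle through a few failed resets (placeholder delay)
  sleep 1

  # Re-add the listener: the next reconnect attempt succeeds and the controller reset completes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420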
00:26:10.577 1833.25 IOPS, 7.16 MiB/s [2024-12-16T05:43:51.771Z] 2955.80 IOPS, 11.55 MiB/s [2024-12-16T05:43:52.705Z] 3927.17 IOPS, 15.34 MiB/s [2024-12-16T05:43:53.642Z] 4616.43 IOPS, 18.03 MiB/s [2024-12-16T05:43:55.019Z] 5127.38 IOPS, 20.03 MiB/s [2024-12-16T05:43:55.956Z] 5527.44 IOPS, 21.59 MiB/s [2024-12-16T05:43:55.956Z] 5849.10 IOPS, 22.85 MiB/s 00:26:15.697 Latency(us) 00:26:15.697 [2024-12-16T05:43:55.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.697 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:15.697 Verification LBA range: start 0x0 length 0x4000 00:26:15.697 NVMe0n1 : 10.01 5854.83 22.87 0.00 0.00 21825.01 1549.03 3035150.89 00:26:15.697 [2024-12-16T05:43:55.956Z] =================================================================================================================== 00:26:15.697 [2024-12-16T05:43:55.956Z] Total : 5854.83 22.87 0.00 0.00 21825.01 1549.03 3035150.89 00:26:15.697 { 00:26:15.697 "results": [ 00:26:15.697 { 00:26:15.697 "job": "NVMe0n1", 00:26:15.697 "core_mask": "0x4", 00:26:15.697 "workload": "verify", 00:26:15.697 "status": "finished", 00:26:15.697 "verify_range": { 00:26:15.697 "start": 0, 00:26:15.697 "length": 16384 00:26:15.697 }, 00:26:15.697 "queue_depth": 128, 00:26:15.697 "io_size": 4096, 00:26:15.697 "runtime": 10.009342, 00:26:15.697 "iops": 5854.830417424042, 00:26:15.697 "mibps": 22.870431318062664, 00:26:15.697 "io_failed": 0, 00:26:15.697 "io_timeout": 0, 00:26:15.697 "avg_latency_us": 21825.006072974855, 00:26:15.697 "min_latency_us": 1549.0327272727272, 00:26:15.697 "max_latency_us": 3035150.8945454545 00:26:15.697 } 00:26:15.697 ], 00:26:15.697 "core_count": 1 00:26:15.697 } 00:26:15.697 05:43:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=90383 00:26:15.697 05:43:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:15.697 05:43:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:26:15.697 Running I/O for 10 seconds... 
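Once the per-line elapsed-time prefix is stripped, the result block above is plain JSON, so the headline numbers can be pulled out mechanically. A small sketch, assuming the block was captured to a file (results.log is a hypothetical name) and that sed and jq are available; none of this is part of the test itself:

  # Strip the leading "HH:MM:SS.mmm " stamp from each captured line, then query the JSON
  sed 's/^[0-9:.]* //' results.log | jq -r \
      '.results[] | "\(.job): \(.iops) IOPS (\(.mibps) MiB/s), avg \(.avg_latency_us) us, \(.io_failed) failed I/O"'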
00:26:16.633 05:43:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:16.894 6572.00 IOPS, 25.67 MiB/s [2024-12-16T05:43:57.153Z] [2024-12-16 05:43:56.929447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.929761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.929794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.929821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.929848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.929875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.929902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.929929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.929955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.929982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60512 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.929994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.930024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.930035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.930051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.930063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.930077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.930089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.930103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.930122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.930137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.930148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.930162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.930174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.894 [2024-12-16 05:43:56.930203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.894 [2024-12-16 05:43:56.930229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.895 [2024-12-16 05:43:56.930255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.895 [2024-12-16 05:43:56.930279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.895 
[2024-12-16 05:43:56.930303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.895 [2024-12-16 05:43:56.930344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.895 [2024-12-16 05:43:56.930368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.895 [2024-12-16 05:43:56.930393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.930980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.930995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.895 [2024-12-16 05:43:56.931022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.895 [2024-12-16 05:43:56.931049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.895 [2024-12-16 05:43:56.931266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.895 [2024-12-16 05:43:56.931455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.895 [2024-12-16 05:43:56.931476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:16.896 [2024-12-16 05:43:56.931631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931928] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.931983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.931996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:16 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.896 [2024-12-16 05:43:56.932577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.896 [2024-12-16 05:43:56.932603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60216 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.932974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.932990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:16.897 [2024-12-16 05:43:56.933115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933397] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.897 [2024-12-16 05:43:56.933565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.933580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:26:16.897 [2024-12-16 05:43:56.933609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:16.897 [2024-12-16 05:43:56.933624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:16.897 [2024-12-16 05:43:56.933643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60432 len:8 PRP1 0x0 PRP2 0x0 00:26:16.897 [2024-12-16 05:43:56.933657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.934019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.897 [2024-12-16 05:43:56.934051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.897 [2024-12-16 05:43:56.934067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.897 [2024-12-16 
05:43:56.934081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.898 [2024-12-16 05:43:56.934095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.898 [2024-12-16 05:43:56.934123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.898 [2024-12-16 05:43:56.934136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.898 [2024-12-16 05:43:56.934148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.898 [2024-12-16 05:43:56.934160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:16.898 [2024-12-16 05:43:56.934398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:16.898 [2024-12-16 05:43:56.934431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:16.898 [2024-12-16 05:43:56.934550] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.898 [2024-12-16 05:43:56.934581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:16.898 [2024-12-16 05:43:56.934613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:16.898 [2024-12-16 05:43:56.934658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:16.898 [2024-12-16 05:43:56.934684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:16.898 [2024-12-16 05:43:56.934698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:16.898 [2024-12-16 05:43:56.934712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:16.898 [2024-12-16 05:43:56.934728] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:26:16.898 [2024-12-16 05:43:56.934743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:16.898 05:43:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:17.834 3726.50 IOPS, 14.56 MiB/s [2024-12-16T05:43:58.094Z] [2024-12-16 05:43:57.934875] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.835 [2024-12-16 05:43:57.934943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:17.835 [2024-12-16 05:43:57.934961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:17.835 [2024-12-16 05:43:57.934989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:17.835 [2024-12-16 05:43:57.935013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:17.835 [2024-12-16 05:43:57.935026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:17.835 [2024-12-16 05:43:57.935039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:17.835 [2024-12-16 05:43:57.935054] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:17.835 [2024-12-16 05:43:57.935066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:18.771 2484.33 IOPS, 9.70 MiB/s [2024-12-16T05:43:59.030Z] [2024-12-16 05:43:58.935223] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.771 [2024-12-16 05:43:58.935301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:18.771 [2024-12-16 05:43:58.935321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:18.771 [2024-12-16 05:43:58.935353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:18.771 [2024-12-16 05:43:58.935380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:18.771 [2024-12-16 05:43:58.935393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:18.771 [2024-12-16 05:43:58.935406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:18.771 [2024-12-16 05:43:58.935421] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:26:18.771 [2024-12-16 05:43:58.935436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:19.707 1863.25 IOPS, 7.28 MiB/s [2024-12-16T05:43:59.966Z] [2024-12-16 05:43:59.938719] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.707 [2024-12-16 05:43:59.938794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:19.707 [2024-12-16 05:43:59.938815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:19.707 [2024-12-16 05:43:59.939053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:19.707 [2024-12-16 05:43:59.939283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:19.707 [2024-12-16 05:43:59.939301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:19.707 [2024-12-16 05:43:59.939314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:19.707 [2024-12-16 05:43:59.939330] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:19.707 [2024-12-16 05:43:59.939345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:19.707 05:43:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:19.965 [2024-12-16 05:44:00.214793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:20.223 05:44:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 90383 00:26:20.809 1490.60 IOPS, 5.82 MiB/s [2024-12-16T05:44:01.068Z] [2024-12-16 05:44:00.971696] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:26:22.733 2410.50 IOPS, 9.42 MiB/s [2024-12-16T05:44:03.928Z] 3297.00 IOPS, 12.88 MiB/s [2024-12-16T05:44:04.866Z] 3969.88 IOPS, 15.51 MiB/s [2024-12-16T05:44:05.801Z] 4487.89 IOPS, 17.53 MiB/s [2024-12-16T05:44:06.059Z] 4939.10 IOPS, 19.29 MiB/s
00:26:25.800 Latency(us)
00:26:25.800 [2024-12-16T05:44:06.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.800 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:25.800 Verification LBA range: start 0x0 length 0x4000
00:26:25.800 NVMe0n1 : 10.01 4946.02 19.32 3921.54 0.00 14403.00 718.66 3019898.88
00:26:25.800 [2024-12-16T05:44:06.059Z] ===================================================================================================================
00:26:25.800 [2024-12-16T05:44:06.059Z] Total : 4946.02 19.32 3921.54 0.00 14403.00 0.00 3019898.88
00:26:25.800 {
00:26:25.800 "results": [
00:26:25.801 {
00:26:25.801 "job": "NVMe0n1",
00:26:25.801 "core_mask": "0x4",
00:26:25.801 "workload": "verify",
00:26:25.801 "status": "finished",
00:26:25.801 "verify_range": {
00:26:25.801 "start": 0,
00:26:25.801 "length": 16384
00:26:25.801 },
00:26:25.801 "queue_depth": 128,
00:26:25.801 "io_size": 4096,
00:26:25.801 "runtime": 10.011895,
00:26:25.801 "iops": 4946.016713119744,
00:26:25.801 "mibps": 19.320377785624,
00:26:25.801 "io_failed": 39262,
00:26:25.801 "io_timeout": 0,
00:26:25.801 "avg_latency_us": 14402.995529059759,
00:26:25.801 "min_latency_us": 718.6618181818181,
00:26:25.801 "max_latency_us": 3019898.88
00:26:25.801 }
00:26:25.801 ],
00:26:25.801 "core_count": 1
00:26:25.801 }
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 90256
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 90256 ']'
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 90256
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90256
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90256'
killing process with pid 90256
Received shutdown signal, test time was about 10.000000 seconds
00:26:25.801
00:26:25.801 Latency(us)
00:26:25.801 [2024-12-16T05:44:06.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.801 [2024-12-16T05:44:06.060Z] ===================================================================================================================
00:26:25.801 [2024-12-16T05:44:06.060Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 90256
00:26:25.801 05:44:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 90256
00:26:26.737 05:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=90504
00:26:26.737 05:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- #
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:26:26.737 05:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 90504 /var/tmp/bdevperf.sock
00:26:26.737 05:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 90504 ']'
00:26:26.737 05:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:26.737 05:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:26.737 05:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:26.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:26.737 05:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:26.737 05:44:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:26.737 [2024-12-16 05:44:06.816761] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:26:26.737 [2024-12-16 05:44:06.817222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90504 ]
00:26:26.737 [2024-12-16 05:44:06.995284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:26.996 [2024-12-16 05:44:07.086272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:26:26.996 [2024-12-16 05:44:07.247890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:26:27.565 05:44:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:27.565 05:44:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:26:27.565 05:44:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:26:27.565 05:44:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=90520
00:26:27.565 05:44:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:26:27.825 05:44:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:26:28.084 NVMe0n1
00:26:28.084 05:44:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:28.084 05:44:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=90562
00:26:28.084 05:44:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:26:28.343 Running I/O for 10 seconds...
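For this phase the test launches a fresh bdevperf application with -z on its own RPC socket so the run is driven over RPC, attaches the target through bdev_nvme with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, and then starts the queued random-read workload via perform_tests. A condensed sketch of that setup, using only commands visible in the trace above (repo paths and flag values as in this environment; the real script waits for the RPC socket with waitforlisten rather than a fixed sleep):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  sleep 1    # stand-in for waitforlisten on /var/tmp/bdevperf.sock

  # bdev_nvme options exactly as issued by host/timeout.sh@118
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_options -r -1 -e 9

  # create the NVMe-oF TCP controller with the timeout knobs under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the queued workload; bdevperf reports a summary and JSON block when the run ends
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests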
00:26:29.280 05:44:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:29.543 13716.00 IOPS, 53.58 MiB/s [2024-12-16T05:44:09.802Z] [2024-12-16 05:44:09.541406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541806] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.541998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.542009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.542047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.542059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.543 [2024-12-16 05:44:09.542074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542158] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542436] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542737] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.542991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543014] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.544 [2024-12-16 05:44:09.543184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.545 [2024-12-16 05:44:09.543196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.545 [2024-12-16 05:44:09.543207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.545 [2024-12-16 05:44:09.543220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.545 [2024-12-16 05:44:09.543231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.545 [2024-12-16 05:44:09.543243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.545 [2024-12-16 05:44:09.543254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:29.545 [2024-12-16 05:44:09.543346] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.543978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.543993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.545 [2024-12-16 05:44:09.544725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.545 [2024-12-16 05:44:09.544743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.544757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.544774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.544788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.544807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.544821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.544841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.544855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.544872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.544886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.544904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.544918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.544936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.544949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.544967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.544982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 
05:44:09.545196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.545970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.545985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.546002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.546016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.546034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.546048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.546065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.546 [2024-12-16 05:44:09.546080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.546 [2024-12-16 05:44:09.546097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 
[2024-12-16 05:44:09.546528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.546970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.546989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.547003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.547020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.547034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.547052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.547066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.547083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.547097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.547115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.547129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.547146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.547170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.547190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.547204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.547222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.547236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.547 [2024-12-16 05:44:09.547260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.547 [2024-12-16 05:44:09.547276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126544 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.548 [2024-12-16 05:44:09.547743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.547760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:29.548 [2024-12-16 05:44:09.547779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.548 [2024-12-16 05:44:09.547797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.548 [2024-12-16 05:44:09.547812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49392 len:8 PRP1 0x0 PRP2 0x0 00:26:29.548 [2024-12-16 05:44:09.547827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.548163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.548 [2024-12-16 05:44:09.548193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.548214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.548 [2024-12-16 05:44:09.548229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.548244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.548 [2024-12-16 05:44:09.548258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.548302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.548 [2024-12-16 05:44:09.548319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.548 [2024-12-16 05:44:09.548347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:29.548 [2024-12-16 05:44:09.548653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:29.548 [2024-12-16 05:44:09.548719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:29.548 [2024-12-16 05:44:09.548891] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.548 [2024-12-16 05:44:09.548940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:29.548 [2024-12-16 05:44:09.548961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:29.548 [2024-12-16 05:44:09.548990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:29.548 [2024-12-16 05:44:09.549018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:29.548 [2024-12-16 05:44:09.549034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:29.548 [2024-12-16 05:44:09.549051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:29.548 [2024-12-16 05:44:09.549067] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
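(Annotation, not part of the captured output: the records just above — uring_sock_create: connect() failed, errno = 111, then "controller reinitialization failed" and another "resetting controller" — are a single pass of the bdev_nvme reconnect loop while nothing is accepting TCP connections on 10.0.0.3 port 4420 at this point in the timeout test. On Linux, errno 111 is ECONNREFUSED; a quick side check, not a command the test itself runs:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # prints: ECONNREFUSED Connection refused

The records that follow repeat the same pattern roughly every two seconds until the host-side controller-loss handling gives up, which is the "already in failed state" record further down.)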
00:26:29.548 [2024-12-16 05:44:09.549084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:29.548 05:44:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 90562 00:26:31.420 7684.50 IOPS, 30.02 MiB/s [2024-12-16T05:44:11.679Z] 5123.00 IOPS, 20.01 MiB/s [2024-12-16T05:44:11.679Z] [2024-12-16 05:44:11.562360] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.420 [2024-12-16 05:44:11.562436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:31.420 [2024-12-16 05:44:11.562462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:31.420 [2024-12-16 05:44:11.562509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:31.420 [2024-12-16 05:44:11.562542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:31.420 [2024-12-16 05:44:11.562557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:31.420 [2024-12-16 05:44:11.562577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:31.420 [2024-12-16 05:44:11.562593] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:31.420 [2024-12-16 05:44:11.562622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:33.291 3842.25 IOPS, 15.01 MiB/s [2024-12-16T05:44:13.809Z] 3073.80 IOPS, 12.01 MiB/s [2024-12-16T05:44:13.809Z] [2024-12-16 05:44:13.562873] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.550 [2024-12-16 05:44:13.562950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:33.550 [2024-12-16 05:44:13.562976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:33.550 [2024-12-16 05:44:13.563010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:33.550 [2024-12-16 05:44:13.563062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:33.550 [2024-12-16 05:44:13.563082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:33.550 [2024-12-16 05:44:13.563103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:33.550 [2024-12-16 05:44:13.563120] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:33.550 [2024-12-16 05:44:13.563137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:35.431 2561.50 IOPS, 10.01 MiB/s [2024-12-16T05:44:15.690Z] 2195.57 IOPS, 8.58 MiB/s [2024-12-16T05:44:15.690Z] [2024-12-16 05:44:15.563273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
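(Annotation, not part of the captured output: the interleaved throughput samples — 7684.50 IOPS / 30.02 MiB/s down to 2195.57 IOPS / 8.58 MiB/s — are bdevperf's periodic readouts for the same 4 KiB random-read job whose final summary follows. The MiB/s column is simply IOPS * io_size / 2^20, i.e. IOPS / 256 for 4096-byte I/O; as a sketch of the conversion using the first sample and the final average:

    awk 'BEGIN { printf "%.2f %.2f\n", 7684.50/256, 1889.10/256 }'
    # prints: 30.02 7.38

which matches the 30.02 MiB/s sample above and the 7.38 MiB/s figure in the summary table below.)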
00:26:35.431 [2024-12-16 05:44:15.563497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:35.431 [2024-12-16 05:44:15.563525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:35.431 [2024-12-16 05:44:15.563549] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:26:35.431 [2024-12-16 05:44:15.563569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:36.366 1921.12 IOPS, 7.50 MiB/s 00:26:36.366 Latency(us) 00:26:36.366 [2024-12-16T05:44:16.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.366 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:26:36.366 NVMe0n1 : 8.14 1889.10 7.38 15.73 0.00 67292.73 8817.57 7046430.72 00:26:36.366 [2024-12-16T05:44:16.625Z] =================================================================================================================== 00:26:36.366 [2024-12-16T05:44:16.625Z] Total : 1889.10 7.38 15.73 0.00 67292.73 8817.57 7046430.72 00:26:36.366 { 00:26:36.366 "results": [ 00:26:36.366 { 00:26:36.366 "job": "NVMe0n1", 00:26:36.366 "core_mask": "0x4", 00:26:36.366 "workload": "randread", 00:26:36.366 "status": "finished", 00:26:36.366 "queue_depth": 128, 00:26:36.366 "io_size": 4096, 00:26:36.366 "runtime": 8.135611, 00:26:36.366 "iops": 1889.1021215247385, 00:26:36.366 "mibps": 7.37930516220601, 00:26:36.366 "io_failed": 128, 00:26:36.366 "io_timeout": 0, 00:26:36.366 "avg_latency_us": 67292.72787436865, 00:26:36.366 "min_latency_us": 8817.57090909091, 00:26:36.366 "max_latency_us": 7046430.72 00:26:36.366 } 00:26:36.366 ], 00:26:36.366 "core_count": 1 00:26:36.366 } 00:26:36.366 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:36.366 Attaching 5 probes... 
00:26:36.366 1360.927833: reset bdev controller NVMe0 00:26:36.366 1361.046392: reconnect bdev controller NVMe0 00:26:36.366 3374.491283: reconnect delay bdev controller NVMe0 00:26:36.367 3374.527104: reconnect bdev controller NVMe0 00:26:36.367 5375.004137: reconnect delay bdev controller NVMe0 00:26:36.367 5375.042194: reconnect bdev controller NVMe0 00:26:36.367 7375.506989: reconnect delay bdev controller NVMe0 00:26:36.367 7375.541924: reconnect bdev controller NVMe0 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 90520 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 90504 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 90504 ']' 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 90504 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90504 00:26:36.367 killing process with pid 90504 00:26:36.367 Received shutdown signal, test time was about 8.204509 seconds 00:26:36.367 00:26:36.367 Latency(us) 00:26:36.367 [2024-12-16T05:44:16.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.367 [2024-12-16T05:44:16.626Z] =================================================================================================================== 00:26:36.367 [2024-12-16T05:44:16.626Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90504' 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 90504 00:26:36.367 05:44:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 90504 00:26:37.304 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:37.563 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:26:37.563 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:26:37.563 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:37.563 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:26:37.563 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:37.563 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:26:37.563 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:37.563 05:44:17 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:37.563 rmmod nvme_tcp 00:26:37.563 rmmod nvme_fabrics 00:26:37.563 rmmod nvme_keyring 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 90059 ']' 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 90059 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 90059 ']' 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 90059 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90059 00:26:37.822 killing process with pid 90059 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90059' 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 90059 00:26:37.822 05:44:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 90059 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:38.760 05:44:18 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.760 05:44:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.020 05:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:26:39.020 ************************************ 00:26:39.020 END TEST nvmf_timeout 00:26:39.020 ************************************ 00:26:39.020 00:26:39.020 real 0m50.062s 00:26:39.020 user 2m25.573s 00:26:39.020 sys 0m5.285s 00:26:39.020 05:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.020 05:44:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:39.020 05:44:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:26:39.020 05:44:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:39.020 00:26:39.020 real 6m23.734s 00:26:39.020 user 17m46.987s 00:26:39.020 sys 1m16.711s 00:26:39.020 ************************************ 00:26:39.020 END TEST nvmf_host 00:26:39.020 ************************************ 00:26:39.020 05:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.020 05:44:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.020 05:44:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:39.020 05:44:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:26:39.020 ************************************ 00:26:39.020 END TEST nvmf_tcp 00:26:39.020 ************************************ 00:26:39.020 00:26:39.020 real 17m4.622s 00:26:39.020 user 44m23.006s 00:26:39.020 sys 4m7.019s 00:26:39.020 05:44:19 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.020 05:44:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:39.020 05:44:19 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:26:39.020 05:44:19 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:39.020 05:44:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:39.020 05:44:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.020 05:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:39.020 ************************************ 00:26:39.020 START TEST nvmf_dif 00:26:39.020 ************************************ 00:26:39.020 05:44:19 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:39.020 * Looking for test storage... 
00:26:39.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:39.020 05:44:19 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:39.020 05:44:19 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:26:39.020 05:44:19 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:39.280 05:44:19 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:26:39.280 05:44:19 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.280 05:44:19 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.280 --rc genhtml_branch_coverage=1 00:26:39.280 --rc genhtml_function_coverage=1 00:26:39.280 --rc genhtml_legend=1 00:26:39.280 --rc geninfo_all_blocks=1 00:26:39.280 --rc geninfo_unexecuted_blocks=1 00:26:39.280 00:26:39.280 ' 00:26:39.280 05:44:19 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.280 --rc genhtml_branch_coverage=1 00:26:39.280 --rc genhtml_function_coverage=1 00:26:39.280 --rc genhtml_legend=1 00:26:39.280 --rc geninfo_all_blocks=1 00:26:39.280 --rc geninfo_unexecuted_blocks=1 00:26:39.280 00:26:39.280 ' 00:26:39.280 05:44:19 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.280 --rc genhtml_branch_coverage=1 00:26:39.280 --rc genhtml_function_coverage=1 00:26:39.280 --rc genhtml_legend=1 00:26:39.280 --rc geninfo_all_blocks=1 00:26:39.280 --rc geninfo_unexecuted_blocks=1 00:26:39.280 00:26:39.280 ' 00:26:39.280 05:44:19 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.280 --rc genhtml_branch_coverage=1 00:26:39.280 --rc genhtml_function_coverage=1 00:26:39.280 --rc genhtml_legend=1 00:26:39.280 --rc geninfo_all_blocks=1 00:26:39.280 --rc geninfo_unexecuted_blocks=1 00:26:39.280 00:26:39.280 ' 00:26:39.280 05:44:19 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.280 05:44:19 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.280 05:44:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.280 05:44:19 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.280 05:44:19 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.280 05:44:19 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:26:39.280 05:44:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.280 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.280 05:44:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:39.280 05:44:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:39.280 05:44:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:39.280 05:44:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:39.280 05:44:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.280 05:44:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:39.280 05:44:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:39.280 05:44:19 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:39.280 05:44:19 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:39.281 Cannot find device "nvmf_init_br" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@162 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:39.281 Cannot find device "nvmf_init_br2" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@163 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:39.281 Cannot find device "nvmf_tgt_br" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@164 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:39.281 Cannot find device "nvmf_tgt_br2" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@165 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:39.281 Cannot find device "nvmf_init_br" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@166 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:39.281 Cannot find device "nvmf_init_br2" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@167 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:39.281 Cannot find device "nvmf_tgt_br" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@168 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:39.281 Cannot find device "nvmf_tgt_br2" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@169 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:39.281 Cannot find device "nvmf_br" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@170 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:26:39.281 Cannot find device "nvmf_init_if" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@171 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:39.281 Cannot find device "nvmf_init_if2" 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@172 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:39.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@173 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:39.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@174 -- # true 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:39.281 05:44:19 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:39.540 05:44:19 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:39.540 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:39.540 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:26:39.540 00:26:39.540 --- 10.0.0.3 ping statistics --- 00:26:39.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.540 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:39.540 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:39.540 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:26:39.540 00:26:39.540 --- 10.0.0.4 ping statistics --- 00:26:39.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.540 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:39.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:26:39.540 00:26:39.540 --- 10.0.0.1 ping statistics --- 00:26:39.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.540 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:39.540 05:44:19 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:39.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:39.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:26:39.540 00:26:39.540 --- 10.0.0.2 ping statistics --- 00:26:39.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.541 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:26:39.541 05:44:19 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.541 05:44:19 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:26:39.541 05:44:19 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:26:39.541 05:44:19 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:40.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:40.109 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:40.109 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.109 05:44:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:40.109 05:44:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.109 05:44:20 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:40.109 05:44:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:40.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=91063 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:40.109 05:44:20 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 91063 00:26:40.109 05:44:20 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 91063 ']' 00:26:40.109 05:44:20 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.109 05:44:20 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.109 05:44:20 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.109 05:44:20 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.109 05:44:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:40.109 [2024-12-16 05:44:20.283250] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:40.109 [2024-12-16 05:44:20.283663] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.369 [2024-12-16 05:44:20.475865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.369 [2024-12-16 05:44:20.600606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
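(Annotation, not part of the captured output: the preceding nvmf_veth_init steps built the virtual topology that the nvmf_tgt starting here will listen on. Condensed to its essentials — the second initiator/target pair (10.0.0.2 / 10.0.0.4) is created the same way, and the per-interface "ip link set ... up" commands are omitted — the setup shown above amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

so 10.0.0.1 on the host and 10.0.0.3 inside nvmf_tgt_ns_spdk sit on the same bridge, which is what the four ping checks above verified before the target was launched inside that namespace.)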
00:26:40.369 [2024-12-16 05:44:20.600951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.369 [2024-12-16 05:44:20.601257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.369 [2024-12-16 05:44:20.601467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.369 [2024-12-16 05:44:20.601639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.369 [2024-12-16 05:44:20.603140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.628 [2024-12-16 05:44:20.815680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:26:41.197 05:44:21 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:41.197 05:44:21 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.197 05:44:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:41.197 05:44:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:41.197 [2024-12-16 05:44:21.288338] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.197 05:44:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.197 05:44:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:41.197 ************************************ 00:26:41.197 START TEST fio_dif_1_default 00:26:41.197 ************************************ 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:41.197 bdev_null0 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:41.197 
05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:41.197 [2024-12-16 05:44:21.336633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:41.197 { 00:26:41.197 "params": { 00:26:41.197 "name": "Nvme$subsystem", 00:26:41.197 "trtype": "$TEST_TRANSPORT", 00:26:41.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:41.197 "adrfam": "ipv4", 00:26:41.197 "trsvcid": "$NVMF_PORT", 00:26:41.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:41.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:41.197 "hdgst": ${hdgst:-false}, 00:26:41.197 "ddgst": ${ddgst:-false} 00:26:41.197 }, 00:26:41.197 "method": "bdev_nvme_attach_controller" 00:26:41.197 } 00:26:41.197 EOF 00:26:41.197 )") 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:41.197 "params": { 00:26:41.197 "name": "Nvme0", 00:26:41.197 "trtype": "tcp", 00:26:41.197 "traddr": "10.0.0.3", 00:26:41.197 "adrfam": "ipv4", 00:26:41.197 "trsvcid": "4420", 00:26:41.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:41.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:41.197 "hdgst": false, 00:26:41.197 "ddgst": false 00:26:41.197 }, 00:26:41.197 "method": "bdev_nvme_attach_controller" 00:26:41.197 }' 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:41.197 05:44:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:41.467 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:41.467 fio-3.35 00:26:41.467 Starting 1 thread 00:26:53.740 00:26:53.740 filename0: (groupid=0, jobs=1): err= 0: pid=91123: Mon Dec 16 05:44:32 2024 00:26:53.740 read: IOPS=7866, BW=30.7MiB/s (32.2MB/s)(307MiB/10001msec) 00:26:53.740 slat (usec): min=7, max=115, avg= 9.93, stdev= 4.44 00:26:53.740 clat (usec): min=401, max=1923, avg=478.35, stdev=45.93 00:26:53.740 lat (usec): min=408, max=1936, avg=488.28, stdev=47.11 00:26:53.740 clat percentiles (usec): 00:26:53.740 | 1.00th=[ 412], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 445], 00:26:53.740 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 482], 00:26:53.740 | 70.00th=[ 494], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 562], 00:26:53.740 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 750], 99.95th=[ 922], 00:26:53.740 | 99.99th=[ 1254] 00:26:53.740 bw ( KiB/s): min=29568, max=32480, per=99.95%, avg=31450.95, stdev=711.26, samples=19 00:26:53.740 iops : min= 7392, max= 8120, avg=7862.74, stdev=177.81, samples=19 00:26:53.740 lat (usec) : 500=75.55%, 750=24.36%, 
1000=0.06% 00:26:53.740 lat (msec) : 2=0.04% 00:26:53.740 cpu : usr=86.21%, sys=11.83%, ctx=125, majf=0, minf=1061 00:26:53.740 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:53.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.740 issued rwts: total=78672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.740 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:53.740 00:26:53.740 Run status group 0 (all jobs): 00:26:53.740 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=307MiB (322MB), run=10001-10001msec 00:26:53.740 ----------------------------------------------------- 00:26:53.740 Suppressions used: 00:26:53.740 count bytes template 00:26:53.740 1 8 /usr/src/fio/parse.c 00:26:53.740 1 8 libtcmalloc_minimal.so 00:26:53.740 1 904 libcrypto.so 00:26:53.740 ----------------------------------------------------- 00:26:53.740 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 ************************************ 00:26:53.740 END TEST fio_dif_1_default 00:26:53.740 ************************************ 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 00:26:53.740 real 0m12.174s 00:26:53.740 user 0m10.373s 00:26:53.740 sys 0m1.511s 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 05:44:33 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:53.740 05:44:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:53.740 05:44:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 ************************************ 00:26:53.740 START TEST fio_dif_1_multi_subsystems 00:26:53.740 ************************************ 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 
-- # create_subsystems 0 1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 bdev_null0 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 [2024-12-16 05:44:33.562357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 bdev_null1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:53.740 05:44:33 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:53.740 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:53.741 { 00:26:53.741 "params": { 00:26:53.741 "name": "Nvme$subsystem", 00:26:53.741 "trtype": "$TEST_TRANSPORT", 00:26:53.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.741 "adrfam": "ipv4", 00:26:53.741 "trsvcid": "$NVMF_PORT", 00:26:53.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.741 "hdgst": ${hdgst:-false}, 00:26:53.741 "ddgst": ${ddgst:-false} 00:26:53.741 }, 00:26:53.741 "method": "bdev_nvme_attach_controller" 00:26:53.741 } 00:26:53.741 EOF 00:26:53.741 )") 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:53.741 { 00:26:53.741 "params": { 00:26:53.741 "name": "Nvme$subsystem", 00:26:53.741 "trtype": "$TEST_TRANSPORT", 00:26:53.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:53.741 "adrfam": "ipv4", 00:26:53.741 "trsvcid": "$NVMF_PORT", 00:26:53.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:53.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:53.741 "hdgst": ${hdgst:-false}, 00:26:53.741 "ddgst": ${ddgst:-false} 00:26:53.741 }, 00:26:53.741 "method": "bdev_nvme_attach_controller" 00:26:53.741 } 00:26:53.741 EOF 00:26:53.741 )") 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
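Condensed, the create_subsystems 0 1 trace above issues the same four RPCs once per subsystem: create a DIF-type-1 null bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.3:4420. A sketch of that loop, with the arguments taken from the xtrace (rpc_cmd is the harness's wrapper for sending RPCs to the running target):

    for i in 0 1; do
        rpc_cmd bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" --serial-number "53313233-$i" --allow-any-host
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
    done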
00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:53.741 "params": { 00:26:53.741 "name": "Nvme0", 00:26:53.741 "trtype": "tcp", 00:26:53.741 "traddr": "10.0.0.3", 00:26:53.741 "adrfam": "ipv4", 00:26:53.741 "trsvcid": "4420", 00:26:53.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:53.741 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:53.741 "hdgst": false, 00:26:53.741 "ddgst": false 00:26:53.741 }, 00:26:53.741 "method": "bdev_nvme_attach_controller" 00:26:53.741 },{ 00:26:53.741 "params": { 00:26:53.741 "name": "Nvme1", 00:26:53.741 "trtype": "tcp", 00:26:53.741 "traddr": "10.0.0.3", 00:26:53.741 "adrfam": "ipv4", 00:26:53.741 "trsvcid": "4420", 00:26:53.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:53.741 "hdgst": false, 00:26:53.741 "ddgst": false 00:26:53.741 }, 00:26:53.741 "method": "bdev_nvme_attach_controller" 00:26:53.741 }' 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:53.741 05:44:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:53.741 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:53.741 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:53.741 fio-3.35 00:26:53.741 Starting 2 threads 00:27:05.948 00:27:05.948 filename0: (groupid=0, jobs=1): err= 0: pid=91288: Mon Dec 16 05:44:44 2024 00:27:05.948 read: IOPS=4295, BW=16.8MiB/s (17.6MB/s)(168MiB/10001msec) 00:27:05.948 slat (nsec): min=7615, max=74359, avg=14716.46, stdev=4889.20 00:27:05.948 clat (usec): min=703, max=1896, avg=889.74, stdev=68.10 00:27:05.948 lat (usec): min=711, max=1914, avg=904.46, stdev=69.62 00:27:05.948 clat percentiles (usec): 00:27:05.948 | 1.00th=[ 750], 5.00th=[ 791], 10.00th=[ 807], 20.00th=[ 840], 00:27:05.948 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 881], 60.00th=[ 898], 00:27:05.948 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 971], 95.00th=[ 1004], 00:27:05.948 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1172], 99.95th=[ 1434], 00:27:05.948 | 99.99th=[ 1729] 00:27:05.948 bw ( KiB/s): min=16864, max=17568, per=50.03%, avg=17195.79, stdev=219.44, samples=19 00:27:05.948 iops : min= 4216, max= 4392, avg=4298.95, stdev=54.86, samples=19 00:27:05.948 lat (usec) : 750=0.89%, 1000=93.33% 00:27:05.948 lat (msec) : 2=5.78% 00:27:05.948 cpu : usr=90.80%, sys=7.78%, ctx=105, majf=0, minf=1060 00:27:05.948 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.948 issued rwts: total=42964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.948 latency : 
target=0, window=0, percentile=100.00%, depth=4 00:27:05.948 filename1: (groupid=0, jobs=1): err= 0: pid=91289: Mon Dec 16 05:44:44 2024 00:27:05.948 read: IOPS=4296, BW=16.8MiB/s (17.6MB/s)(168MiB/10001msec) 00:27:05.948 slat (nsec): min=5549, max=66640, avg=14902.31, stdev=5091.90 00:27:05.948 clat (usec): min=639, max=1615, avg=888.60, stdev=57.52 00:27:05.948 lat (usec): min=647, max=1638, avg=903.50, stdev=58.45 00:27:05.948 clat percentiles (usec): 00:27:05.948 | 1.00th=[ 791], 5.00th=[ 816], 10.00th=[ 832], 20.00th=[ 848], 00:27:05.948 | 30.00th=[ 857], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 889], 00:27:05.948 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 996], 00:27:05.948 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1188], 99.95th=[ 1237], 00:27:05.948 | 99.99th=[ 1549] 00:27:05.948 bw ( KiB/s): min=16864, max=17568, per=50.04%, avg=17199.21, stdev=216.80, samples=19 00:27:05.949 iops : min= 4216, max= 4392, avg=4299.79, stdev=54.22, samples=19 00:27:05.949 lat (usec) : 750=0.03%, 1000=95.47% 00:27:05.949 lat (msec) : 2=4.50% 00:27:05.949 cpu : usr=90.23%, sys=8.30%, ctx=19, majf=0, minf=1075 00:27:05.949 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:05.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.949 issued rwts: total=42968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.949 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:05.949 00:27:05.949 Run status group 0 (all jobs): 00:27:05.949 READ: bw=33.6MiB/s (35.2MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=336MiB (352MB), run=10001-10001msec 00:27:05.949 ----------------------------------------------------- 00:27:05.949 Suppressions used: 00:27:05.949 count bytes template 00:27:05.949 2 16 /usr/src/fio/parse.c 00:27:05.949 1 8 libtcmalloc_minimal.so 00:27:05.949 1 904 libcrypto.so 00:27:05.949 ----------------------------------------------------- 00:27:05.949 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 
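Each of these dif.sh cases ends up invoking fio the same way: the ASan runtime and the SPDK fio plugin are pre-loaded, /dev/fd/62 carries the generated bdev_nvme attach-controller JSON, and /dev/fd/61 carries the generated fio job file. The shape of the call, lifted from the xtrace above:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61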
00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 ************************************ 00:27:05.949 END TEST fio_dif_1_multi_subsystems 00:27:05.949 ************************************ 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.949 00:27:05.949 real 0m12.302s 00:27:05.949 user 0m19.999s 00:27:05.949 sys 0m1.962s 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.949 05:44:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 05:44:45 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:05.949 05:44:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:05.949 05:44:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.949 05:44:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 ************************************ 00:27:05.949 START TEST fio_dif_rand_params 00:27:05.949 ************************************ 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.949 05:44:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 bdev_null0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:05.949 [2024-12-16 05:44:45.918818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:05.949 { 00:27:05.949 "params": { 00:27:05.949 "name": "Nvme$subsystem", 00:27:05.949 "trtype": "$TEST_TRANSPORT", 00:27:05.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:05.949 "adrfam": "ipv4", 00:27:05.949 "trsvcid": "$NVMF_PORT", 00:27:05.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:05.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:05.949 "hdgst": ${hdgst:-false}, 00:27:05.949 "ddgst": ${ddgst:-false} 00:27:05.949 }, 00:27:05.949 "method": "bdev_nvme_attach_controller" 00:27:05.949 } 00:27:05.949 EOF 00:27:05.949 )") 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:05.949 05:44:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:05.949 "params": { 00:27:05.949 "name": "Nvme0", 00:27:05.949 "trtype": "tcp", 00:27:05.949 "traddr": "10.0.0.3", 00:27:05.949 "adrfam": "ipv4", 00:27:05.949 "trsvcid": "4420", 00:27:05.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.949 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:05.949 "hdgst": false, 00:27:05.949 "ddgst": false 00:27:05.949 }, 00:27:05.949 "method": "bdev_nvme_attach_controller" 00:27:05.949 }' 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:05.949 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:05.950 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:05.950 05:44:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:05.950 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:05.950 ... 
00:27:05.950 fio-3.35 00:27:05.950 Starting 3 threads 00:27:12.539 00:27:12.539 filename0: (groupid=0, jobs=1): err= 0: pid=91444: Mon Dec 16 05:44:51 2024 00:27:12.539 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(143MiB/5007msec) 00:27:12.539 slat (nsec): min=5620, max=68071, avg=18342.19, stdev=6644.13 00:27:12.539 clat (usec): min=9638, max=16097, avg=13097.60, stdev=500.20 00:27:12.539 lat (usec): min=9651, max=16121, avg=13115.94, stdev=501.36 00:27:12.539 clat percentiles (usec): 00:27:12.539 | 1.00th=[12649], 5.00th=[12780], 10.00th=[12780], 20.00th=[12780], 00:27:12.539 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:27:12.539 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[14091], 00:27:12.539 | 99.00th=[14877], 99.50th=[15008], 99.90th=[16057], 99.95th=[16057], 00:27:12.539 | 99.99th=[16057] 00:27:12.539 bw ( KiB/s): min=28416, max=29952, per=33.24%, avg=29104.89, stdev=706.18, samples=9 00:27:12.539 iops : min= 222, max= 234, avg=227.33, stdev= 5.57, samples=9 00:27:12.539 lat (msec) : 10=0.26%, 20=99.74% 00:27:12.539 cpu : usr=93.51%, sys=5.89%, ctx=28, majf=0, minf=1072 00:27:12.539 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.539 issued rwts: total=1143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.539 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.539 filename0: (groupid=0, jobs=1): err= 0: pid=91445: Mon Dec 16 05:44:51 2024 00:27:12.539 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(143MiB/5008msec) 00:27:12.539 slat (nsec): min=5574, max=60826, avg=18214.41, stdev=6066.03 00:27:12.539 clat (usec): min=9640, max=16116, avg=13102.38, stdev=518.47 00:27:12.539 lat (usec): min=9656, max=16151, avg=13120.59, stdev=519.44 00:27:12.539 clat percentiles (usec): 00:27:12.539 | 1.00th=[12649], 5.00th=[12649], 10.00th=[12780], 20.00th=[12780], 00:27:12.539 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:27:12.539 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[14222], 00:27:12.539 | 99.00th=[15008], 99.50th=[16057], 99.90th=[16057], 99.95th=[16057], 00:27:12.539 | 99.99th=[16057] 00:27:12.539 bw ( KiB/s): min=28416, max=29952, per=33.23%, avg=29098.67, stdev=712.67, samples=9 00:27:12.539 iops : min= 222, max= 234, avg=227.33, stdev= 5.57, samples=9 00:27:12.539 lat (msec) : 10=0.26%, 20=99.74% 00:27:12.539 cpu : usr=92.85%, sys=6.53%, ctx=46, majf=0, minf=1075 00:27:12.539 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.539 issued rwts: total=1143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.539 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.539 filename0: (groupid=0, jobs=1): err= 0: pid=91446: Mon Dec 16 05:44:51 2024 00:27:12.539 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(143MiB/5001msec) 00:27:12.539 slat (nsec): min=3911, max=59046, avg=17758.61, stdev=6458.32 00:27:12.539 clat (usec): min=12592, max=18173, avg=13117.28, stdev=534.31 00:27:12.539 lat (usec): min=12607, max=18230, avg=13135.03, stdev=535.77 00:27:12.539 clat percentiles (usec): 00:27:12.539 | 1.00th=[12649], 5.00th=[12649], 10.00th=[12780], 20.00th=[12780], 00:27:12.539 | 30.00th=[12780], 
40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:27:12.539 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[14222], 00:27:12.539 | 99.00th=[15008], 99.50th=[16057], 99.90th=[18220], 99.95th=[18220], 00:27:12.539 | 99.99th=[18220] 00:27:12.539 bw ( KiB/s): min=27648, max=29952, per=33.23%, avg=29098.67, stdev=712.67, samples=9 00:27:12.539 iops : min= 216, max= 234, avg=227.33, stdev= 5.57, samples=9 00:27:12.539 lat (msec) : 20=100.00% 00:27:12.539 cpu : usr=92.36%, sys=6.94%, ctx=155, majf=0, minf=1075 00:27:12.539 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.539 issued rwts: total=1140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.539 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.539 00:27:12.539 Run status group 0 (all jobs): 00:27:12.539 READ: bw=85.5MiB/s (89.7MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=428MiB (449MB), run=5001-5008msec 00:27:12.798 ----------------------------------------------------- 00:27:12.798 Suppressions used: 00:27:12.798 count bytes template 00:27:12.798 5 44 /usr/src/fio/parse.c 00:27:12.798 1 8 libtcmalloc_minimal.so 00:27:12.798 1 904 libcrypto.so 00:27:12.798 ----------------------------------------------------- 00:27:12.798 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.798 05:44:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.798 
05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:12.798 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.799 bdev_null0 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.799 [2024-12-16 05:44:53.036857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:12.799 bdev_null1 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.799 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:13.058 
05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:13.058 bdev_null2 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.058 { 00:27:13.058 "params": { 00:27:13.058 "name": "Nvme$subsystem", 00:27:13.058 "trtype": "$TEST_TRANSPORT", 00:27:13.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.058 "adrfam": "ipv4", 00:27:13.058 "trsvcid": "$NVMF_PORT", 00:27:13.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.058 "hdgst": ${hdgst:-false}, 00:27:13.058 "ddgst": ${ddgst:-false} 00:27:13.058 }, 00:27:13.058 "method": "bdev_nvme_attach_controller" 00:27:13.058 } 00:27:13.058 EOF 00:27:13.058 )") 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:13.058 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.058 { 00:27:13.058 "params": { 00:27:13.059 "name": "Nvme$subsystem", 00:27:13.059 "trtype": "$TEST_TRANSPORT", 00:27:13.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.059 "adrfam": "ipv4", 00:27:13.059 "trsvcid": "$NVMF_PORT", 00:27:13.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.059 "hdgst": ${hdgst:-false}, 00:27:13.059 "ddgst": ${ddgst:-false} 00:27:13.059 }, 00:27:13.059 "method": "bdev_nvme_attach_controller" 00:27:13.059 } 00:27:13.059 EOF 00:27:13.059 )") 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:13.059 05:44:53 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.059 { 00:27:13.059 "params": { 00:27:13.059 "name": "Nvme$subsystem", 00:27:13.059 "trtype": "$TEST_TRANSPORT", 00:27:13.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.059 "adrfam": "ipv4", 00:27:13.059 "trsvcid": "$NVMF_PORT", 00:27:13.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.059 "hdgst": ${hdgst:-false}, 00:27:13.059 "ddgst": ${ddgst:-false} 00:27:13.059 }, 00:27:13.059 "method": "bdev_nvme_attach_controller" 00:27:13.059 } 00:27:13.059 EOF 00:27:13.059 )") 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:13.059 "params": { 00:27:13.059 "name": "Nvme0", 00:27:13.059 "trtype": "tcp", 00:27:13.059 "traddr": "10.0.0.3", 00:27:13.059 "adrfam": "ipv4", 00:27:13.059 "trsvcid": "4420", 00:27:13.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:13.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:13.059 "hdgst": false, 00:27:13.059 "ddgst": false 00:27:13.059 }, 00:27:13.059 "method": "bdev_nvme_attach_controller" 00:27:13.059 },{ 00:27:13.059 "params": { 00:27:13.059 "name": "Nvme1", 00:27:13.059 "trtype": "tcp", 00:27:13.059 "traddr": "10.0.0.3", 00:27:13.059 "adrfam": "ipv4", 00:27:13.059 "trsvcid": "4420", 00:27:13.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:13.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:13.059 "hdgst": false, 00:27:13.059 "ddgst": false 00:27:13.059 }, 00:27:13.059 "method": "bdev_nvme_attach_controller" 00:27:13.059 },{ 00:27:13.059 "params": { 00:27:13.059 "name": "Nvme2", 00:27:13.059 "trtype": "tcp", 00:27:13.059 "traddr": "10.0.0.3", 00:27:13.059 "adrfam": "ipv4", 00:27:13.059 "trsvcid": "4420", 00:27:13.059 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:13.059 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:13.059 "hdgst": false, 00:27:13.059 "ddgst": false 00:27:13.059 }, 00:27:13.059 "method": "bdev_nvme_attach_controller" 00:27:13.059 }' 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:13.059 05:44:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:13.318 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:13.318 ... 00:27:13.318 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:13.318 ... 00:27:13.318 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:13.318 ... 00:27:13.318 fio-3.35 00:27:13.318 Starting 24 threads 00:27:25.523 00:27:25.523 filename0: (groupid=0, jobs=1): err= 0: pid=91551: Mon Dec 16 05:45:04 2024 00:27:25.523 read: IOPS=187, BW=750KiB/s (768kB/s)(7540KiB/10059msec) 00:27:25.523 slat (usec): min=5, max=8048, avg=37.66, stdev=386.89 00:27:25.523 clat (msec): min=29, max=182, avg=85.10, stdev=23.37 00:27:25.523 lat (msec): min=29, max=182, avg=85.14, stdev=23.38 00:27:25.523 clat percentiles (msec): 00:27:25.523 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 63], 00:27:25.523 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 95], 00:27:25.523 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 124], 00:27:25.523 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 184], 00:27:25.523 | 99.99th=[ 184] 00:27:25.523 bw ( KiB/s): min= 584, max= 1031, per=4.22%, avg=745.95, stdev=98.88, samples=19 00:27:25.523 iops : min= 146, max= 257, avg=186.42, stdev=24.60, samples=19 00:27:25.523 lat (msec) : 50=6.74%, 100=74.01%, 250=19.26% 00:27:25.524 cpu : usr=32.47%, sys=1.94%, ctx=899, majf=0, minf=1075 00:27:25.524 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:27:25.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 issued rwts: total=1885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.524 filename0: (groupid=0, jobs=1): err= 0: pid=91552: Mon Dec 16 05:45:04 2024 00:27:25.524 read: IOPS=185, BW=743KiB/s (761kB/s)(7496KiB/10083msec) 00:27:25.524 slat (usec): min=7, max=8037, avg=31.53, stdev=333.84 00:27:25.524 clat (msec): min=16, max=171, avg=85.71, stdev=24.01 00:27:25.524 lat (msec): min=16, max=171, avg=85.74, stdev=24.01 00:27:25.524 clat percentiles (msec): 00:27:25.524 | 1.00th=[ 28], 5.00th=[ 47], 10.00th=[ 58], 20.00th=[ 64], 00:27:25.524 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 94], 00:27:25.524 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 116], 95.00th=[ 125], 00:27:25.524 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 171], 00:27:25.524 | 99.99th=[ 171] 00:27:25.524 bw ( KiB/s): min= 536, max= 1103, per=4.21%, avg=744.80, stdev=110.94, samples=20 00:27:25.524 iops : min= 134, max= 275, avg=186.10, stdev=27.62, samples=20 00:27:25.524 lat (msec) : 20=0.37%, 50=6.14%, 100=70.92%, 250=22.57% 00:27:25.524 cpu : usr=37.64%, sys=2.58%, ctx=1186, majf=0, minf=1075 00:27:25.524 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:27:25.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 issued rwts: total=1874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.524 filename0: (groupid=0, jobs=1): err= 0: pid=91553: Mon Dec 16 05:45:04 2024 00:27:25.524 read: 
IOPS=190, BW=764KiB/s (782kB/s)(7660KiB/10029msec) 00:27:25.524 slat (usec): min=5, max=8046, avg=37.56, stdev=377.52 00:27:25.524 clat (msec): min=31, max=155, avg=83.58, stdev=22.21 00:27:25.524 lat (msec): min=31, max=155, avg=83.61, stdev=22.21 00:27:25.524 clat percentiles (msec): 00:27:25.524 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 57], 20.00th=[ 62], 00:27:25.524 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 86], 60.00th=[ 91], 00:27:25.524 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 112], 95.00th=[ 122], 00:27:25.524 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:27:25.524 | 99.99th=[ 157] 00:27:25.524 bw ( KiB/s): min= 600, max= 896, per=4.28%, avg=757.89, stdev=80.26, samples=19 00:27:25.524 iops : min= 150, max= 224, avg=189.47, stdev=20.06, samples=19 00:27:25.524 lat (msec) : 50=4.23%, 100=77.49%, 250=18.28% 00:27:25.524 cpu : usr=35.21%, sys=2.48%, ctx=1055, majf=0, minf=1074 00:27:25.524 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=83.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:25.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.524 filename0: (groupid=0, jobs=1): err= 0: pid=91554: Mon Dec 16 05:45:04 2024 00:27:25.524 read: IOPS=184, BW=740KiB/s (758kB/s)(7428KiB/10038msec) 00:27:25.524 slat (usec): min=5, max=8034, avg=24.18, stdev=219.47 00:27:25.524 clat (msec): min=33, max=155, avg=86.30, stdev=21.46 00:27:25.524 lat (msec): min=33, max=155, avg=86.33, stdev=21.45 00:27:25.524 clat percentiles (msec): 00:27:25.524 | 1.00th=[ 46], 5.00th=[ 54], 10.00th=[ 61], 20.00th=[ 65], 00:27:25.524 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 94], 00:27:25.524 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 112], 95.00th=[ 124], 00:27:25.524 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:27:25.524 | 99.99th=[ 157] 00:27:25.524 bw ( KiB/s): min= 592, max= 816, per=4.15%, avg=734.74, stdev=62.50, samples=19 00:27:25.524 iops : min= 148, max= 204, avg=183.68, stdev=15.62, samples=19 00:27:25.524 lat (msec) : 50=3.55%, 100=77.01%, 250=19.44% 00:27:25.524 cpu : usr=33.32%, sys=2.29%, ctx=979, majf=0, minf=1072 00:27:25.524 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:25.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 issued rwts: total=1857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.524 filename0: (groupid=0, jobs=1): err= 0: pid=91555: Mon Dec 16 05:45:04 2024 00:27:25.524 read: IOPS=192, BW=771KiB/s (789kB/s)(7740KiB/10042msec) 00:27:25.524 slat (usec): min=5, max=8039, avg=32.77, stdev=297.60 00:27:25.524 clat (msec): min=26, max=156, avg=82.83, stdev=22.42 00:27:25.524 lat (msec): min=26, max=156, avg=82.86, stdev=22.42 00:27:25.524 clat percentiles (msec): 00:27:25.524 | 1.00th=[ 41], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 63], 00:27:25.524 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 86], 60.00th=[ 90], 00:27:25.524 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 111], 95.00th=[ 122], 00:27:25.524 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:27:25.524 | 99.99th=[ 157] 00:27:25.524 bw ( KiB/s): min= 616, max= 872, per=4.34%, avg=767.47, 
stdev=70.26, samples=19 00:27:25.524 iops : min= 154, max= 218, avg=191.84, stdev=17.56, samples=19 00:27:25.524 lat (msec) : 50=5.84%, 100=75.66%, 250=18.50% 00:27:25.524 cpu : usr=41.38%, sys=2.55%, ctx=1302, majf=0, minf=1074 00:27:25.524 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:25.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 complete : 0=0.0%, 4=86.7%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 issued rwts: total=1935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.524 filename0: (groupid=0, jobs=1): err= 0: pid=91556: Mon Dec 16 05:45:04 2024 00:27:25.524 read: IOPS=178, BW=715KiB/s (732kB/s)(7196KiB/10068msec) 00:27:25.524 slat (usec): min=4, max=8036, avg=21.57, stdev=189.18 00:27:25.524 clat (msec): min=7, max=155, avg=89.22, stdev=27.76 00:27:25.524 lat (msec): min=7, max=155, avg=89.24, stdev=27.76 00:27:25.524 clat percentiles (msec): 00:27:25.524 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 61], 20.00th=[ 69], 00:27:25.524 | 30.00th=[ 82], 40.00th=[ 88], 50.00th=[ 93], 60.00th=[ 96], 00:27:25.524 | 70.00th=[ 101], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 132], 00:27:25.524 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:27:25.524 | 99.99th=[ 157] 00:27:25.524 bw ( KiB/s): min= 512, max= 1408, per=4.05%, avg=715.40, stdev=177.57, samples=20 00:27:25.524 iops : min= 128, max= 352, avg=178.80, stdev=44.40, samples=20 00:27:25.524 lat (msec) : 10=1.67%, 20=0.11%, 50=5.89%, 100=63.20%, 250=29.13% 00:27:25.524 cpu : usr=37.17%, sys=2.50%, ctx=1128, majf=0, minf=1074 00:27:25.524 IO depths : 1=0.1%, 2=2.2%, 4=8.8%, 8=73.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:27:25.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 complete : 0=0.0%, 4=89.8%, 8=8.3%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 issued rwts: total=1799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.524 filename0: (groupid=0, jobs=1): err= 0: pid=91557: Mon Dec 16 05:45:04 2024 00:27:25.524 read: IOPS=183, BW=734KiB/s (752kB/s)(7368KiB/10039msec) 00:27:25.524 slat (usec): min=5, max=8048, avg=44.82, stdev=457.17 00:27:25.524 clat (msec): min=31, max=180, avg=86.86, stdev=22.85 00:27:25.524 lat (msec): min=31, max=180, avg=86.91, stdev=22.85 00:27:25.524 clat percentiles (msec): 00:27:25.524 | 1.00th=[ 32], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 64], 00:27:25.524 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 96], 00:27:25.524 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 122], 00:27:25.524 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:27:25.524 | 99.99th=[ 180] 00:27:25.524 bw ( KiB/s): min= 544, max= 896, per=4.13%, avg=730.84, stdev=81.29, samples=19 00:27:25.524 iops : min= 136, max= 224, avg=182.68, stdev=20.30, samples=19 00:27:25.524 lat (msec) : 50=4.45%, 100=74.59%, 250=20.96% 00:27:25.524 cpu : usr=31.60%, sys=2.11%, ctx=866, majf=0, minf=1074 00:27:25.524 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=76.0%, 16=14.8%, 32=0.0%, >=64=0.0% 00:27:25.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 complete : 0=0.0%, 4=88.8%, 8=9.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 issued rwts: total=1842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.524 filename0: (groupid=0, 
jobs=1): err= 0: pid=91558: Mon Dec 16 05:45:04 2024 00:27:25.524 read: IOPS=177, BW=711KiB/s (728kB/s)(7124KiB/10020msec) 00:27:25.524 slat (usec): min=5, max=3331, avg=19.23, stdev=78.79 00:27:25.524 clat (msec): min=21, max=181, avg=89.87, stdev=22.55 00:27:25.524 lat (msec): min=21, max=181, avg=89.89, stdev=22.55 00:27:25.524 clat percentiles (msec): 00:27:25.524 | 1.00th=[ 44], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 68], 00:27:25.524 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 95], 00:27:25.524 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 131], 00:27:25.524 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 182], 99.95th=[ 182], 00:27:25.524 | 99.99th=[ 182] 00:27:25.524 bw ( KiB/s): min= 512, max= 824, per=3.97%, avg=701.37, stdev=95.14, samples=19 00:27:25.524 iops : min= 128, max= 206, avg=175.32, stdev=23.77, samples=19 00:27:25.524 lat (msec) : 50=1.91%, 100=71.08%, 250=27.01% 00:27:25.524 cpu : usr=36.99%, sys=2.58%, ctx=1468, majf=0, minf=1073 00:27:25.524 IO depths : 1=0.1%, 2=2.2%, 4=8.7%, 8=74.5%, 16=14.6%, 32=0.0%, >=64=0.0% 00:27:25.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 complete : 0=0.0%, 4=89.3%, 8=8.8%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.524 issued rwts: total=1781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.525 filename1: (groupid=0, jobs=1): err= 0: pid=91559: Mon Dec 16 05:45:04 2024 00:27:25.525 read: IOPS=190, BW=762KiB/s (780kB/s)(7620KiB/10006msec) 00:27:25.525 slat (usec): min=6, max=9034, avg=32.38, stdev=331.71 00:27:25.525 clat (msec): min=12, max=155, avg=83.90, stdev=22.35 00:27:25.525 lat (msec): min=12, max=155, avg=83.93, stdev=22.36 00:27:25.525 clat percentiles (msec): 00:27:25.525 | 1.00th=[ 37], 5.00th=[ 51], 10.00th=[ 60], 20.00th=[ 61], 00:27:25.525 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 93], 00:27:25.525 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 121], 00:27:25.525 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:27:25.525 | 99.99th=[ 157] 00:27:25.525 bw ( KiB/s): min= 616, max= 920, per=4.24%, avg=750.63, stdev=74.14, samples=19 00:27:25.525 iops : min= 154, max= 230, avg=187.63, stdev=18.51, samples=19 00:27:25.525 lat (msec) : 20=0.37%, 50=4.67%, 100=77.59%, 250=17.38% 00:27:25.525 cpu : usr=30.99%, sys=2.24%, ctx=855, majf=0, minf=1061 00:27:25.525 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:27:25.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.525 filename1: (groupid=0, jobs=1): err= 0: pid=91560: Mon Dec 16 05:45:04 2024 00:27:25.525 read: IOPS=199, BW=797KiB/s (817kB/s)(7984KiB/10013msec) 00:27:25.525 slat (usec): min=5, max=4030, avg=21.07, stdev=114.17 00:27:25.525 clat (usec): min=1806, max=183210, avg=80158.23, stdev=27959.28 00:27:25.525 lat (usec): min=1815, max=183229, avg=80179.30, stdev=27959.32 00:27:25.525 clat percentiles (usec): 00:27:25.525 | 1.00th=[ 1958], 5.00th=[ 35914], 10.00th=[ 49546], 20.00th=[ 60031], 00:27:25.525 | 30.00th=[ 65274], 40.00th=[ 71828], 50.00th=[ 84411], 60.00th=[ 89654], 00:27:25.525 | 70.00th=[ 94897], 80.00th=[ 99091], 90.00th=[110625], 95.00th=[125305], 00:27:25.525 | 99.00th=[152044], 
99.50th=[152044], 99.90th=[183501], 99.95th=[183501], 00:27:25.525 | 99.99th=[183501] 00:27:25.525 bw ( KiB/s): min= 576, max= 872, per=4.28%, avg=756.53, stdev=76.17, samples=19 00:27:25.525 iops : min= 144, max= 218, avg=189.11, stdev=19.04, samples=19 00:27:25.525 lat (msec) : 2=1.30%, 4=1.90%, 10=0.20%, 20=0.80%, 50=6.16% 00:27:25.525 lat (msec) : 100=70.69%, 250=18.94% 00:27:25.525 cpu : usr=41.20%, sys=2.71%, ctx=1227, majf=0, minf=1074 00:27:25.525 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:25.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.525 filename1: (groupid=0, jobs=1): err= 0: pid=91561: Mon Dec 16 05:45:04 2024 00:27:25.525 read: IOPS=176, BW=708KiB/s (725kB/s)(7116KiB/10055msec) 00:27:25.525 slat (usec): min=4, max=8037, avg=23.97, stdev=212.78 00:27:25.525 clat (msec): min=30, max=179, avg=90.21, stdev=23.25 00:27:25.525 lat (msec): min=30, max=179, avg=90.24, stdev=23.25 00:27:25.525 clat percentiles (msec): 00:27:25.525 | 1.00th=[ 35], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 70], 00:27:25.525 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 96], 00:27:25.525 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 131], 00:27:25.525 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 180], 00:27:25.525 | 99.99th=[ 180] 00:27:25.525 bw ( KiB/s): min= 624, max= 1008, per=3.97%, avg=701.79, stdev=92.58, samples=19 00:27:25.525 iops : min= 156, max= 252, avg=175.42, stdev=23.11, samples=19 00:27:25.525 lat (msec) : 50=3.82%, 100=68.13%, 250=28.05% 00:27:25.525 cpu : usr=35.71%, sys=2.44%, ctx=1137, majf=0, minf=1071 00:27:25.525 IO depths : 1=0.1%, 2=2.3%, 4=9.2%, 8=73.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:27:25.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 complete : 0=0.0%, 4=89.6%, 8=8.4%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 issued rwts: total=1779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.525 filename1: (groupid=0, jobs=1): err= 0: pid=91562: Mon Dec 16 05:45:04 2024 00:27:25.525 read: IOPS=189, BW=757KiB/s (775kB/s)(7612KiB/10053msec) 00:27:25.525 slat (usec): min=5, max=8033, avg=40.46, stdev=359.28 00:27:25.525 clat (msec): min=26, max=152, avg=84.17, stdev=24.12 00:27:25.525 lat (msec): min=26, max=152, avg=84.21, stdev=24.12 00:27:25.525 clat percentiles (msec): 00:27:25.525 | 1.00th=[ 30], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 64], 00:27:25.525 | 30.00th=[ 67], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 93], 00:27:25.525 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 114], 95.00th=[ 128], 00:27:25.525 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 153], 99.95th=[ 153], 00:27:25.525 | 99.99th=[ 153] 00:27:25.525 bw ( KiB/s): min= 592, max= 1008, per=4.28%, avg=756.90, stdev=81.74, samples=20 00:27:25.525 iops : min= 148, max= 252, avg=189.20, stdev=20.44, samples=20 00:27:25.525 lat (msec) : 50=7.30%, 100=70.63%, 250=22.07% 00:27:25.525 cpu : usr=41.33%, sys=2.58%, ctx=1276, majf=0, minf=1074 00:27:25.525 IO depths : 1=0.1%, 2=0.6%, 4=2.0%, 8=81.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:25.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.5%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:25.525 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.525 filename1: (groupid=0, jobs=1): err= 0: pid=91563: Mon Dec 16 05:45:04 2024 00:27:25.525 read: IOPS=190, BW=761KiB/s (779kB/s)(7620KiB/10013msec) 00:27:25.525 slat (usec): min=4, max=8034, avg=27.97, stdev=259.68 00:27:25.525 clat (msec): min=12, max=155, avg=83.98, stdev=22.52 00:27:25.525 lat (msec): min=12, max=155, avg=84.01, stdev=22.53 00:27:25.525 clat percentiles (msec): 00:27:25.525 | 1.00th=[ 36], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 62], 00:27:25.525 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 93], 00:27:25.525 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 121], 00:27:25.525 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:27:25.525 | 99.99th=[ 157] 00:27:25.525 bw ( KiB/s): min= 616, max= 864, per=4.24%, avg=749.05, stdev=60.41, samples=19 00:27:25.525 iops : min= 154, max= 216, avg=187.21, stdev=15.11, samples=19 00:27:25.525 lat (msec) : 20=0.31%, 50=4.88%, 100=77.43%, 250=17.38% 00:27:25.525 cpu : usr=31.88%, sys=2.20%, ctx=858, majf=0, minf=1072 00:27:25.525 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:27:25.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.525 filename1: (groupid=0, jobs=1): err= 0: pid=91564: Mon Dec 16 05:45:04 2024 00:27:25.525 read: IOPS=180, BW=721KiB/s (739kB/s)(7252KiB/10055msec) 00:27:25.525 slat (usec): min=5, max=8048, avg=51.19, stdev=506.35 00:27:25.525 clat (msec): min=31, max=180, avg=88.33, stdev=22.99 00:27:25.525 lat (msec): min=31, max=180, avg=88.39, stdev=23.00 00:27:25.525 clat percentiles (msec): 00:27:25.525 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 70], 00:27:25.525 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 96], 00:27:25.525 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 131], 00:27:25.525 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 180], 00:27:25.525 | 99.99th=[ 180] 00:27:25.525 bw ( KiB/s): min= 560, max= 1008, per=4.08%, avg=721.30, stdev=92.67, samples=20 00:27:25.525 iops : min= 140, max= 252, avg=180.30, stdev=23.17, samples=20 00:27:25.525 lat (msec) : 50=5.96%, 100=72.42%, 250=21.62% 00:27:25.525 cpu : usr=32.02%, sys=1.86%, ctx=868, majf=0, minf=1073 00:27:25.525 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:27:25.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.525 filename1: (groupid=0, jobs=1): err= 0: pid=91565: Mon Dec 16 05:45:04 2024 00:27:25.525 read: IOPS=188, BW=752KiB/s (771kB/s)(7560KiB/10047msec) 00:27:25.525 slat (usec): min=5, max=8042, avg=33.97, stdev=303.46 00:27:25.525 clat (msec): min=35, max=157, avg=84.73, stdev=22.44 00:27:25.525 lat (msec): min=35, max=157, avg=84.76, stdev=22.44 00:27:25.525 clat percentiles (msec): 00:27:25.525 | 1.00th=[ 36], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 63], 00:27:25.525 | 30.00th=[ 68], 40.00th=[ 83], 
50.00th=[ 86], 60.00th=[ 93], 00:27:25.525 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 111], 95.00th=[ 122], 00:27:25.525 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:27:25.525 | 99.99th=[ 159] 00:27:25.525 bw ( KiB/s): min= 616, max= 824, per=4.23%, avg=748.53, stdev=57.41, samples=19 00:27:25.525 iops : min= 154, max= 206, avg=187.11, stdev=14.36, samples=19 00:27:25.525 lat (msec) : 50=4.18%, 100=75.08%, 250=20.74% 00:27:25.525 cpu : usr=38.71%, sys=2.96%, ctx=1192, majf=0, minf=1071 00:27:25.525 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:25.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.525 issued rwts: total=1890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.525 filename1: (groupid=0, jobs=1): err= 0: pid=91566: Mon Dec 16 05:45:04 2024 00:27:25.525 read: IOPS=181, BW=728KiB/s (745kB/s)(7324KiB/10064msec) 00:27:25.525 slat (usec): min=5, max=8036, avg=26.26, stdev=264.96 00:27:25.525 clat (msec): min=5, max=179, avg=87.66, stdev=29.69 00:27:25.525 lat (msec): min=5, max=179, avg=87.68, stdev=29.69 00:27:25.525 clat percentiles (msec): 00:27:25.525 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 48], 20.00th=[ 70], 00:27:25.525 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 96], 00:27:25.525 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:27:25.525 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 169], 99.95th=[ 180], 00:27:25.525 | 99.99th=[ 180] 00:27:25.525 bw ( KiB/s): min= 576, max= 1792, per=4.12%, avg=728.05, stdev=256.99, samples=20 00:27:25.526 iops : min= 144, max= 448, avg=181.95, stdev=64.25, samples=20 00:27:25.526 lat (msec) : 10=3.28%, 20=1.20%, 50=5.68%, 100=61.88%, 250=27.96% 00:27:25.526 cpu : usr=32.28%, sys=2.23%, ctx=873, majf=0, minf=1075 00:27:25.526 IO depths : 1=0.2%, 2=2.8%, 4=10.4%, 8=71.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:25.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 complete : 0=0.0%, 4=90.2%, 8=7.5%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 issued rwts: total=1831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.526 filename2: (groupid=0, jobs=1): err= 0: pid=91567: Mon Dec 16 05:45:04 2024 00:27:25.526 read: IOPS=192, BW=769KiB/s (788kB/s)(7720KiB/10035msec) 00:27:25.526 slat (usec): min=5, max=5033, avg=22.88, stdev=147.93 00:27:25.526 clat (msec): min=31, max=157, avg=82.96, stdev=22.66 00:27:25.526 lat (msec): min=31, max=157, avg=82.98, stdev=22.67 00:27:25.526 clat percentiles (msec): 00:27:25.526 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 63], 00:27:25.526 | 30.00th=[ 67], 40.00th=[ 75], 50.00th=[ 86], 60.00th=[ 91], 00:27:25.526 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 112], 95.00th=[ 121], 00:27:25.526 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:27:25.526 | 99.99th=[ 159] 00:27:25.526 bw ( KiB/s): min= 616, max= 953, per=4.34%, avg=767.95, stdev=77.10, samples=19 00:27:25.526 iops : min= 154, max= 238, avg=191.95, stdev=19.25, samples=19 00:27:25.526 lat (msec) : 50=6.17%, 100=73.42%, 250=20.41% 00:27:25.526 cpu : usr=39.66%, sys=2.80%, ctx=1488, majf=0, minf=1071 00:27:25.526 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:25.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:25.526 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 issued rwts: total=1930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.526 filename2: (groupid=0, jobs=1): err= 0: pid=91568: Mon Dec 16 05:45:04 2024 00:27:25.526 read: IOPS=193, BW=772KiB/s (791kB/s)(7752KiB/10036msec) 00:27:25.526 slat (usec): min=5, max=10669, avg=30.83, stdev=319.47 00:27:25.526 clat (msec): min=30, max=154, avg=82.65, stdev=21.84 00:27:25.526 lat (msec): min=30, max=154, avg=82.68, stdev=21.83 00:27:25.526 clat percentiles (msec): 00:27:25.526 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 57], 20.00th=[ 63], 00:27:25.526 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 90], 00:27:25.526 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 120], 00:27:25.526 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:27:25.526 | 99.99th=[ 155] 00:27:25.526 bw ( KiB/s): min= 616, max= 872, per=4.34%, avg=767.21, stdev=63.98, samples=19 00:27:25.526 iops : min= 154, max= 218, avg=191.79, stdev=16.00, samples=19 00:27:25.526 lat (msec) : 50=4.23%, 100=78.33%, 250=17.44% 00:27:25.526 cpu : usr=40.03%, sys=2.69%, ctx=1506, majf=0, minf=1072 00:27:25.526 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:25.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 complete : 0=0.0%, 4=86.7%, 8=13.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.526 filename2: (groupid=0, jobs=1): err= 0: pid=91569: Mon Dec 16 05:45:04 2024 00:27:25.526 read: IOPS=179, BW=717KiB/s (734kB/s)(7208KiB/10053msec) 00:27:25.526 slat (usec): min=5, max=8039, avg=40.24, stdev=421.92 00:27:25.526 clat (msec): min=38, max=170, avg=89.00, stdev=21.36 00:27:25.526 lat (msec): min=38, max=178, avg=89.04, stdev=21.39 00:27:25.526 clat percentiles (msec): 00:27:25.526 | 1.00th=[ 43], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 71], 00:27:25.526 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 90], 60.00th=[ 95], 00:27:25.526 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 128], 00:27:25.526 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 169], 99.95th=[ 171], 00:27:25.526 | 99.99th=[ 171] 00:27:25.526 bw ( KiB/s): min= 576, max= 792, per=4.02%, avg=711.63, stdev=68.68, samples=19 00:27:25.526 iops : min= 144, max= 198, avg=177.89, stdev=17.16, samples=19 00:27:25.526 lat (msec) : 50=2.22%, 100=75.31%, 250=22.48% 00:27:25.526 cpu : usr=34.50%, sys=2.18%, ctx=1137, majf=0, minf=1074 00:27:25.526 IO depths : 1=0.1%, 2=1.9%, 4=7.3%, 8=75.7%, 16=15.0%, 32=0.0%, >=64=0.0% 00:27:25.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 issued rwts: total=1802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.526 filename2: (groupid=0, jobs=1): err= 0: pid=91570: Mon Dec 16 05:45:04 2024 00:27:25.526 read: IOPS=180, BW=721KiB/s (738kB/s)(7244KiB/10048msec) 00:27:25.526 slat (usec): min=5, max=8041, avg=31.98, stdev=326.27 00:27:25.526 clat (msec): min=32, max=155, avg=88.54, stdev=22.40 00:27:25.526 lat (msec): min=32, max=155, avg=88.57, stdev=22.40 00:27:25.526 clat percentiles (msec): 00:27:25.526 | 1.00th=[ 42], 5.00th=[ 57], 
10.00th=[ 61], 20.00th=[ 69], 00:27:25.526 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 96], 00:27:25.526 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 131], 00:27:25.526 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:27:25.526 | 99.99th=[ 157] 00:27:25.526 bw ( KiB/s): min= 512, max= 880, per=4.03%, avg=712.74, stdev=80.94, samples=19 00:27:25.526 iops : min= 128, max= 220, avg=178.16, stdev=20.23, samples=19 00:27:25.526 lat (msec) : 50=3.26%, 100=75.10%, 250=21.65% 00:27:25.526 cpu : usr=32.05%, sys=2.01%, ctx=874, majf=0, minf=1073 00:27:25.526 IO depths : 1=0.1%, 2=2.0%, 4=7.9%, 8=75.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:27:25.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 complete : 0=0.0%, 4=89.2%, 8=9.1%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 issued rwts: total=1811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.526 filename2: (groupid=0, jobs=1): err= 0: pid=91571: Mon Dec 16 05:45:04 2024 00:27:25.526 read: IOPS=163, BW=652KiB/s (668kB/s)(6536KiB/10023msec) 00:27:25.526 slat (nsec): min=5456, max=42471, avg=16611.20, stdev=6241.86 00:27:25.526 clat (msec): min=32, max=180, avg=97.96, stdev=23.04 00:27:25.526 lat (msec): min=33, max=180, avg=97.98, stdev=23.03 00:27:25.526 clat percentiles (msec): 00:27:25.526 | 1.00th=[ 55], 5.00th=[ 64], 10.00th=[ 66], 20.00th=[ 84], 00:27:25.526 | 30.00th=[ 86], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 101], 00:27:25.526 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 129], 95.00th=[ 140], 00:27:25.526 | 99.00th=[ 165], 99.50th=[ 167], 99.90th=[ 182], 99.95th=[ 182], 00:27:25.526 | 99.99th=[ 182] 00:27:25.526 bw ( KiB/s): min= 512, max= 824, per=3.62%, avg=639.47, stdev=88.66, samples=19 00:27:25.526 iops : min= 128, max= 206, avg=159.84, stdev=22.17, samples=19 00:27:25.526 lat (msec) : 50=0.80%, 100=59.98%, 250=39.23% 00:27:25.526 cpu : usr=37.61%, sys=2.78%, ctx=1104, majf=0, minf=1074 00:27:25.526 IO depths : 1=0.1%, 2=4.3%, 4=17.3%, 8=64.8%, 16=13.5%, 32=0.0%, >=64=0.0% 00:27:25.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 complete : 0=0.0%, 4=92.0%, 8=4.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 issued rwts: total=1634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.526 filename2: (groupid=0, jobs=1): err= 0: pid=91572: Mon Dec 16 05:45:04 2024 00:27:25.526 read: IOPS=177, BW=710KiB/s (727kB/s)(7148KiB/10070msec) 00:27:25.526 slat (usec): min=4, max=7055, avg=26.98, stdev=248.63 00:27:25.526 clat (msec): min=3, max=184, avg=89.83, stdev=31.22 00:27:25.526 lat (msec): min=3, max=184, avg=89.86, stdev=31.23 00:27:25.526 clat percentiles (msec): 00:27:25.526 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 54], 20.00th=[ 71], 00:27:25.526 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 93], 60.00th=[ 96], 00:27:25.526 | 70.00th=[ 103], 80.00th=[ 111], 90.00th=[ 126], 95.00th=[ 144], 00:27:25.526 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 184], 00:27:25.526 | 99.99th=[ 184] 00:27:25.526 bw ( KiB/s): min= 512, max= 1777, per=4.01%, avg=709.75, stdev=264.72, samples=20 00:27:25.526 iops : min= 128, max= 444, avg=177.40, stdev=66.13, samples=20 00:27:25.526 lat (msec) : 4=0.56%, 10=3.02%, 20=1.79%, 50=4.36%, 100=58.42% 00:27:25.526 lat (msec) : 250=31.84% 00:27:25.526 cpu : usr=38.39%, sys=2.81%, ctx=1416, majf=0, minf=1073 00:27:25.526 IO depths : 1=0.1%, 
2=4.1%, 4=16.3%, 8=65.8%, 16=13.7%, 32=0.0%, >=64=0.0% 00:27:25.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 complete : 0=0.0%, 4=91.8%, 8=4.6%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 issued rwts: total=1787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.526 filename2: (groupid=0, jobs=1): err= 0: pid=91573: Mon Dec 16 05:45:04 2024 00:27:25.526 read: IOPS=172, BW=688KiB/s (705kB/s)(6920KiB/10056msec) 00:27:25.526 slat (usec): min=4, max=8034, avg=25.23, stdev=227.42 00:27:25.526 clat (msec): min=30, max=157, avg=92.77, stdev=22.93 00:27:25.526 lat (msec): min=30, max=157, avg=92.80, stdev=22.93 00:27:25.526 clat percentiles (msec): 00:27:25.526 | 1.00th=[ 34], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 78], 00:27:25.526 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 96], 00:27:25.526 | 70.00th=[ 101], 80.00th=[ 110], 90.00th=[ 126], 95.00th=[ 132], 00:27:25.526 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 159], 99.95th=[ 159], 00:27:25.526 | 99.99th=[ 159] 00:27:25.526 bw ( KiB/s): min= 512, max= 1008, per=3.87%, avg=683.68, stdev=120.41, samples=19 00:27:25.526 iops : min= 128, max= 252, avg=170.89, stdev=30.11, samples=19 00:27:25.526 lat (msec) : 50=3.47%, 100=65.32%, 250=31.21% 00:27:25.526 cpu : usr=40.01%, sys=2.87%, ctx=1419, majf=0, minf=1073 00:27:25.526 IO depths : 1=0.1%, 2=3.8%, 4=15.0%, 8=67.5%, 16=13.8%, 32=0.0%, >=64=0.0% 00:27:25.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 complete : 0=0.0%, 4=91.3%, 8=5.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.526 issued rwts: total=1730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.526 filename2: (groupid=0, jobs=1): err= 0: pid=91574: Mon Dec 16 05:45:04 2024 00:27:25.526 read: IOPS=198, BW=793KiB/s (812kB/s)(7996KiB/10082msec) 00:27:25.526 slat (usec): min=5, max=4039, avg=19.21, stdev=91.98 00:27:25.526 clat (msec): min=2, max=160, avg=80.43, stdev=35.10 00:27:25.527 lat (msec): min=2, max=160, avg=80.45, stdev=35.10 00:27:25.527 clat percentiles (msec): 00:27:25.527 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 61], 00:27:25.527 | 30.00th=[ 72], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 95], 00:27:25.527 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 115], 95.00th=[ 126], 00:27:25.527 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 161], 00:27:25.527 | 99.99th=[ 161] 00:27:25.527 bw ( KiB/s): min= 512, max= 2793, per=4.48%, avg=791.95, stdev=476.08, samples=20 00:27:25.527 iops : min= 128, max= 698, avg=197.95, stdev=118.97, samples=20 00:27:25.527 lat (msec) : 4=4.00%, 10=6.80%, 20=1.10%, 50=4.55%, 100=58.28% 00:27:25.527 lat (msec) : 250=25.26% 00:27:25.527 cpu : usr=42.59%, sys=2.81%, ctx=1364, majf=0, minf=1073 00:27:25.527 IO depths : 1=0.6%, 2=2.9%, 4=9.2%, 8=72.7%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:25.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.527 complete : 0=0.0%, 4=90.0%, 8=8.0%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.527 issued rwts: total=1999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:25.527 00:27:25.527 Run status group 0 (all jobs): 00:27:25.527 READ: bw=17.3MiB/s (18.1MB/s), 652KiB/s-797KiB/s (668kB/s-817kB/s), io=174MiB (182MB), run=10006-10083msec 00:27:25.527 ----------------------------------------------------- 00:27:25.527 
Suppressions used: 00:27:25.527 count bytes template 00:27:25.527 45 402 /usr/src/fio/parse.c 00:27:25.527 1 8 libtcmalloc_minimal.so 00:27:25.527 1 904 libcrypto.so 00:27:25.527 ----------------------------------------------------- 00:27:25.527 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:25.527 05:45:05 
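The destroy_subsystems 0 1 2 call traced just above undoes the earlier setup one index at a time, deleting each NVMe-oF subsystem before its backing null bdev. A standalone equivalent of that teardown might look like the loop below (the scripts/rpc.py path is an assumption; the subsystem and bdev names match this run):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of the SPDK RPC client
for sub in 0 1 2; do
    # delete the subsystem first so its namespace releases the bdev, then drop the bdev itself
    $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
    $RPC bdev_null_delete "bdev_null$sub"
done
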
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 bdev_null0 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 [2024-12-16 05:45:05.694584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:25.527 
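Each create_subsystem index traced here expands to the same four RPCs: a DIF-capable null bdev, an NVMe-oF subsystem, a namespace mapping, and a TCP listener. Replayed by hand for subsystem 0 with the values from this run, the sequence would look roughly like this (rpc_cmd is the harness wrapper around the SPDK RPC client; the scripts/rpc.py path is an assumption):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of the SPDK RPC client

# 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, protection information type 1
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# NVMe-oF subsystem, namespace, and TCP listener (addresses as used in this run)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
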
05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 bdev_null1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.527 { 00:27:25.527 "params": { 00:27:25.527 "name": "Nvme$subsystem", 00:27:25.527 "trtype": "$TEST_TRANSPORT", 00:27:25.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.527 "adrfam": "ipv4", 00:27:25.527 "trsvcid": "$NVMF_PORT", 00:27:25.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.527 "hdgst": ${hdgst:-false}, 00:27:25.527 "ddgst": ${ddgst:-false} 00:27:25.527 }, 00:27:25.527 "method": "bdev_nvme_attach_controller" 00:27:25.527 } 00:27:25.527 EOF 00:27:25.527 )") 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:25.527 
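Stripped of the fio_bdev/fio_plugin wrappers, the command being assembled here is a single fio invocation with two process substitutions: /dev/fd/62 carries the bdev JSON emitted by gen_nvmf_target_json 0 1, and /dev/fd/61 carries the job file emitted by gen_fio_conf. A standalone sketch of the same pattern, reusing the ASan preload seen earlier in this run and leaving the helper bodies elided, would be:

# Sketch only: fio loads the SPDK bdev ioengine from the preloaded plugin and
# attaches the controllers described in the JSON before running the job file.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) \
    <(gen_fio_conf)
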
05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:25.527 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.528 { 00:27:25.528 "params": { 00:27:25.528 "name": "Nvme$subsystem", 00:27:25.528 "trtype": "$TEST_TRANSPORT", 00:27:25.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.528 "adrfam": "ipv4", 00:27:25.528 "trsvcid": "$NVMF_PORT", 00:27:25.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.528 "hdgst": ${hdgst:-false}, 00:27:25.528 "ddgst": ${ddgst:-false} 00:27:25.528 }, 00:27:25.528 "method": "bdev_nvme_attach_controller" 00:27:25.528 } 00:27:25.528 EOF 00:27:25.528 )") 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
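Two generated inputs meet at this point: gen_nvmf_target_json has just finished assembling the bdev JSON (printed next), and gen_fio_conf has produced the fio job file that reaches fio as /dev/fd/61. The job file itself is never echoed into the log, but given the variables set for this pass (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5) and the fio banner that follows, its shape is roughly the heredoc below; the filename= values and time_based are inferred, everything else is taken from this run.

# Approximate reconstruction of the gen_fio_conf output for this pass (sketch only).
cat <<EOF
[global]
thread=1            # the banner below reports "Starting 4 threads" (2 jobs x 2 sections)
rw=randread
bs=8k,16k,128k      # read,write,trim sizes, matching (R) 8k / (W) 16k / (T) 128k in the banner
iodepth=8
numjobs=2
runtime=5
time_based=1        # inferred from the ~5000 msec run times reported later

[filename0]
filename=Nvme0n1    # inferred: namespace bdev of the attached "Nvme0" controller

[filename1]
filename=Nvme1n1    # inferred: namespace bdev of the attached "Nvme1" controller
EOF
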
00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:25.528 "params": { 00:27:25.528 "name": "Nvme0", 00:27:25.528 "trtype": "tcp", 00:27:25.528 "traddr": "10.0.0.3", 00:27:25.528 "adrfam": "ipv4", 00:27:25.528 "trsvcid": "4420", 00:27:25.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:25.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:25.528 "hdgst": false, 00:27:25.528 "ddgst": false 00:27:25.528 }, 00:27:25.528 "method": "bdev_nvme_attach_controller" 00:27:25.528 },{ 00:27:25.528 "params": { 00:27:25.528 "name": "Nvme1", 00:27:25.528 "trtype": "tcp", 00:27:25.528 "traddr": "10.0.0.3", 00:27:25.528 "adrfam": "ipv4", 00:27:25.528 "trsvcid": "4420", 00:27:25.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.528 "hdgst": false, 00:27:25.528 "ddgst": false 00:27:25.528 }, 00:27:25.528 "method": "bdev_nvme_attach_controller" 00:27:25.528 }' 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:25.528 05:45:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:25.787 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:25.787 ... 00:27:25.787 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:25.787 ... 
00:27:25.787 fio-3.35 00:27:25.787 Starting 4 threads 00:27:32.352 00:27:32.352 filename0: (groupid=0, jobs=1): err= 0: pid=91705: Mon Dec 16 05:45:11 2024 00:27:32.352 read: IOPS=1633, BW=12.8MiB/s (13.4MB/s)(63.8MiB/5001msec) 00:27:32.352 slat (nsec): min=5540, max=75636, avg=18165.59, stdev=6164.02 00:27:32.352 clat (usec): min=3706, max=9521, avg=4827.37, stdev=236.38 00:27:32.352 lat (usec): min=3732, max=9547, avg=4845.54, stdev=236.28 00:27:32.352 clat percentiles (usec): 00:27:32.352 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4621], 20.00th=[ 4686], 00:27:32.352 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4817], 00:27:32.352 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5211], 00:27:32.352 | 99.00th=[ 5473], 99.50th=[ 5538], 99.90th=[ 5997], 99.95th=[ 9110], 00:27:32.352 | 99.99th=[ 9503] 00:27:32.352 bw ( KiB/s): min=12800, max=13184, per=22.90%, avg=13058.78, stdev=154.42, samples=9 00:27:32.352 iops : min= 1600, max= 1648, avg=1632.33, stdev=19.31, samples=9 00:27:32.352 lat (msec) : 4=0.10%, 10=99.90% 00:27:32.352 cpu : usr=91.76%, sys=7.42%, ctx=5, majf=0, minf=1074 00:27:32.352 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.352 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.352 issued rwts: total=8168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.352 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:32.352 filename0: (groupid=0, jobs=1): err= 0: pid=91706: Mon Dec 16 05:45:11 2024 00:27:32.352 read: IOPS=2226, BW=17.4MiB/s (18.2MB/s)(87.0MiB/5003msec) 00:27:32.352 slat (nsec): min=5414, max=58366, avg=12824.90, stdev=5565.16 00:27:32.352 clat (usec): min=1001, max=8718, avg=3560.70, stdev=1261.54 00:27:32.352 lat (usec): min=1011, max=8735, avg=3573.53, stdev=1261.19 00:27:32.352 clat percentiles (usec): 00:27:32.352 | 1.00th=[ 1549], 5.00th=[ 1582], 10.00th=[ 1614], 20.00th=[ 1680], 00:27:32.352 | 30.00th=[ 3294], 40.00th=[ 3458], 50.00th=[ 3818], 60.00th=[ 4424], 00:27:32.352 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4948], 00:27:32.352 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 5473], 99.95th=[ 5604], 00:27:32.352 | 99.99th=[ 5800] 00:27:32.352 bw ( KiB/s): min=16271, max=18256, per=31.22%, avg=17799.00, stdev=612.67, samples=9 00:27:32.352 iops : min= 2033, max= 2282, avg=2224.78, stdev=76.86, samples=9 00:27:32.352 lat (msec) : 2=26.54%, 4=25.63%, 10=47.83% 00:27:32.352 cpu : usr=90.88%, sys=7.98%, ctx=56, majf=0, minf=1060 00:27:32.352 IO depths : 1=0.1%, 2=0.7%, 4=63.3%, 8=36.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.352 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.352 issued rwts: total=11137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.352 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:32.352 filename1: (groupid=0, jobs=1): err= 0: pid=91707: Mon Dec 16 05:45:11 2024 00:27:32.352 read: IOPS=1634, BW=12.8MiB/s (13.4MB/s)(63.9MiB/5003msec) 00:27:32.352 slat (nsec): min=5580, max=76146, avg=18688.93, stdev=6208.71 00:27:32.352 clat (usec): min=2356, max=8362, avg=4821.60, stdev=229.97 00:27:32.352 lat (usec): min=2372, max=8384, avg=4840.29, stdev=230.02 00:27:32.352 clat percentiles (usec): 00:27:32.352 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4621], 20.00th=[ 4686], 00:27:32.353 | 30.00th=[ 4752], 
40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4817], 00:27:32.353 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5211], 00:27:32.353 | 99.00th=[ 5473], 99.50th=[ 5538], 99.90th=[ 5932], 99.95th=[ 8029], 00:27:32.353 | 99.99th=[ 8356] 00:27:32.353 bw ( KiB/s): min=12912, max=13312, per=22.94%, avg=13080.00, stdev=133.07, samples=10 00:27:32.353 iops : min= 1614, max= 1664, avg=1635.00, stdev=16.63, samples=10 00:27:32.353 lat (msec) : 4=0.20%, 10=99.80% 00:27:32.353 cpu : usr=91.46%, sys=7.66%, ctx=7, majf=0, minf=1073 00:27:32.353 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.353 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.353 issued rwts: total=8176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.353 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:32.353 filename1: (groupid=0, jobs=1): err= 0: pid=91708: Mon Dec 16 05:45:11 2024 00:27:32.353 read: IOPS=1634, BW=12.8MiB/s (13.4MB/s)(63.9MiB/5001msec) 00:27:32.353 slat (usec): min=5, max=279, avg=18.65, stdev= 6.95 00:27:32.353 clat (usec): min=2381, max=7126, avg=4819.27, stdev=214.51 00:27:32.353 lat (usec): min=2395, max=7148, avg=4837.93, stdev=214.93 00:27:32.353 clat percentiles (usec): 00:27:32.353 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4621], 20.00th=[ 4686], 00:27:32.353 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4817], 00:27:32.353 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5211], 00:27:32.353 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 5932], 99.95th=[ 6849], 00:27:32.353 | 99.99th=[ 7111] 00:27:32.353 bw ( KiB/s): min=12800, max=13312, per=22.92%, avg=13070.22, stdev=162.47, samples=9 00:27:32.353 iops : min= 1600, max= 1664, avg=1633.78, stdev=20.31, samples=9 00:27:32.353 lat (msec) : 4=0.20%, 10=99.80% 00:27:32.353 cpu : usr=91.04%, sys=7.88%, ctx=95, majf=0, minf=1075 00:27:32.353 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.353 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.353 issued rwts: total=8176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.353 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:32.353 00:27:32.353 Run status group 0 (all jobs): 00:27:32.353 READ: bw=55.7MiB/s (58.4MB/s), 12.8MiB/s-17.4MiB/s (13.4MB/s-18.2MB/s), io=279MiB (292MB), run=5001-5003msec 00:27:32.921 ----------------------------------------------------- 00:27:32.921 Suppressions used: 00:27:32.921 count bytes template 00:27:32.921 6 52 /usr/src/fio/parse.c 00:27:32.921 1 8 libtcmalloc_minimal.so 00:27:32.921 1 904 libcrypto.so 00:27:32.921 ----------------------------------------------------- 00:27:32.921 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.921 05:45:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:32.921 05:45:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.921 05:45:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:32.921 05:45:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.921 05:45:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:32.921 ************************************ 00:27:32.921 END TEST fio_dif_rand_params 00:27:32.921 ************************************ 00:27:32.921 05:45:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.921 00:27:32.921 real 0m27.123s 00:27:32.921 user 2m6.316s 00:27:32.921 sys 0m9.561s 00:27:32.921 05:45:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:32.921 05:45:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:32.921 05:45:13 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:32.921 05:45:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:32.921 05:45:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:32.921 05:45:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:32.921 ************************************ 00:27:32.921 START TEST fio_dif_digest 00:27:32.921 ************************************ 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:32.921 05:45:13 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:32.921 bdev_null0 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.921 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:32.922 [2024-12-16 05:45:13.102880] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # 
local file 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:32.922 { 00:27:32.922 "params": { 00:27:32.922 "name": "Nvme$subsystem", 00:27:32.922 "trtype": "$TEST_TRANSPORT", 00:27:32.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.922 "adrfam": "ipv4", 00:27:32.922 "trsvcid": "$NVMF_PORT", 00:27:32.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.922 "hdgst": ${hdgst:-false}, 00:27:32.922 "ddgst": ${ddgst:-false} 00:27:32.922 }, 00:27:32.922 "method": "bdev_nvme_attach_controller" 00:27:32.922 } 00:27:32.922 EOF 00:27:32.922 )") 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:32.922 "params": { 00:27:32.922 "name": "Nvme0", 00:27:32.922 "trtype": "tcp", 00:27:32.922 "traddr": "10.0.0.3", 00:27:32.922 "adrfam": "ipv4", 00:27:32.922 "trsvcid": "4420", 00:27:32.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.922 "hdgst": true, 00:27:32.922 "ddgst": true 00:27:32.922 }, 00:27:32.922 "method": "bdev_nvme_attach_controller" 00:27:32.922 }' 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:32.922 05:45:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.181 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:33.181 ... 00:27:33.181 fio-3.35 00:27:33.181 Starting 3 threads 00:27:45.457 00:27:45.457 filename0: (groupid=0, jobs=1): err= 0: pid=91818: Mon Dec 16 05:45:24 2024 00:27:45.457 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10011msec) 00:27:45.457 slat (nsec): min=5484, max=71431, avg=19096.37, stdev=6842.91 00:27:45.457 clat (usec): min=11517, max=22219, avg=15054.44, stdev=674.11 00:27:45.457 lat (usec): min=11531, max=22243, avg=15073.53, stdev=674.68 00:27:45.457 clat percentiles (usec): 00:27:45.457 | 1.00th=[14484], 5.00th=[14484], 10.00th=[14615], 20.00th=[14615], 00:27:45.457 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[15008], 00:27:45.457 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15795], 95.00th=[16450], 00:27:45.457 | 99.00th=[17171], 99.50th=[17433], 99.90th=[22152], 99.95th=[22152], 00:27:45.457 | 99.99th=[22152] 00:27:45.457 bw ( KiB/s): min=24576, max=26112, per=33.40%, avg=25465.26, stdev=385.12, samples=19 00:27:45.457 iops : min= 192, max= 204, avg=198.95, stdev= 3.01, samples=19 00:27:45.457 lat (msec) : 20=99.85%, 50=0.15% 00:27:45.457 cpu : usr=93.15%, sys=6.26%, ctx=22, majf=0, minf=1073 00:27:45.457 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.457 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.457 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:45.457 filename0: (groupid=0, jobs=1): err= 0: pid=91819: Mon Dec 16 05:45:24 2024 00:27:45.457 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(248MiB/10002msec) 00:27:45.457 slat (nsec): min=6000, max=59390, avg=18780.73, stdev=6752.29 00:27:45.457 clat (usec): min=14417, max=22385, avg=15065.23, stdev=671.77 00:27:45.457 lat (usec): min=14428, max=22409, avg=15084.01, stdev=672.31 00:27:45.457 clat percentiles (usec): 00:27:45.457 | 1.00th=[14484], 5.00th=[14484], 10.00th=[14615], 20.00th=[14615], 00:27:45.457 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[15008], 00:27:45.457 | 
70.00th=[15139], 80.00th=[15401], 90.00th=[15795], 95.00th=[16450], 00:27:45.457 | 99.00th=[17171], 99.50th=[17433], 99.90th=[22414], 99.95th=[22414], 00:27:45.457 | 99.99th=[22414] 00:27:45.457 bw ( KiB/s): min=24576, max=26112, per=33.40%, avg=25465.26, stdev=385.12, samples=19 00:27:45.457 iops : min= 192, max= 204, avg=198.95, stdev= 3.01, samples=19 00:27:45.457 lat (msec) : 20=99.85%, 50=0.15% 00:27:45.457 cpu : usr=92.54%, sys=6.87%, ctx=15, majf=0, minf=1075 00:27:45.457 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.457 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.457 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:45.457 filename0: (groupid=0, jobs=1): err= 0: pid=91820: Mon Dec 16 05:45:24 2024 00:27:45.457 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10012msec) 00:27:45.457 slat (nsec): min=5429, max=61291, avg=19098.16, stdev=6536.54 00:27:45.457 clat (usec): min=11515, max=22209, avg=15055.69, stdev=674.97 00:27:45.457 lat (usec): min=11529, max=22228, avg=15074.78, stdev=675.41 00:27:45.457 clat percentiles (usec): 00:27:45.457 | 1.00th=[14484], 5.00th=[14484], 10.00th=[14615], 20.00th=[14615], 00:27:45.457 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[15008], 00:27:45.457 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15795], 95.00th=[16450], 00:27:45.457 | 99.00th=[17171], 99.50th=[17433], 99.90th=[22152], 99.95th=[22152], 00:27:45.457 | 99.99th=[22152] 00:27:45.457 bw ( KiB/s): min=24576, max=26112, per=33.39%, avg=25462.58, stdev=386.19, samples=19 00:27:45.457 iops : min= 192, max= 204, avg=198.89, stdev= 3.03, samples=19 00:27:45.457 lat (msec) : 20=99.85%, 50=0.15% 00:27:45.457 cpu : usr=92.99%, sys=6.41%, ctx=19, majf=0, minf=1075 00:27:45.457 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.457 issued rwts: total=1989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.457 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:45.457 00:27:45.457 Run status group 0 (all jobs): 00:27:45.457 READ: bw=74.5MiB/s (78.1MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=746MiB (782MB), run=10002-10012msec 00:27:45.457 ----------------------------------------------------- 00:27:45.457 Suppressions used: 00:27:45.458 count bytes template 00:27:45.458 5 44 /usr/src/fio/parse.c 00:27:45.458 1 8 libtcmalloc_minimal.so 00:27:45.458 1 904 libcrypto.so 00:27:45.458 ----------------------------------------------------- 00:27:45.458 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.458 05:45:25 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.458 00:27:45.458 real 0m12.298s 00:27:45.458 user 0m29.775s 00:27:45.458 sys 0m2.306s 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:45.458 05:45:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:45.458 ************************************ 00:27:45.458 END TEST fio_dif_digest 00:27:45.458 ************************************ 00:27:45.458 05:45:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:45.458 05:45:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:45.458 rmmod nvme_tcp 00:27:45.458 rmmod nvme_fabrics 00:27:45.458 rmmod nvme_keyring 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 91063 ']' 00:27:45.458 05:45:25 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 91063 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 91063 ']' 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 91063 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91063 00:27:45.458 killing process with pid 91063 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91063' 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@973 -- # kill 91063 00:27:45.458 05:45:25 nvmf_dif -- common/autotest_common.sh@978 -- # wait 91063 00:27:46.395 05:45:26 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:27:46.395 05:45:26 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:46.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:46.653 Waiting for block devices as requested 00:27:46.653 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:46.653 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:46.912 05:45:26 
nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:46.912 05:45:26 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:46.912 05:45:27 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:46.912 05:45:27 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:46.912 05:45:27 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:46.912 05:45:27 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:46.912 05:45:27 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:46.912 05:45:27 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.912 05:45:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:46.912 05:45:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.912 05:45:27 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:27:46.912 ************************************ 00:27:46.912 END TEST nvmf_dif 00:27:46.912 ************************************ 00:27:46.912 00:27:46.912 real 1m7.973s 00:27:46.912 user 4m3.454s 00:27:46.912 sys 0m20.077s 00:27:46.912 05:45:27 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.912 05:45:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:47.171 05:45:27 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:47.171 05:45:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:47.171 05:45:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:47.171 05:45:27 -- common/autotest_common.sh@10 -- # set +x 00:27:47.171 ************************************ 00:27:47.171 START TEST nvmf_abort_qd_sizes 00:27:47.171 ************************************ 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:47.171 * Looking for test storage... 
00:27:47.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:47.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.171 --rc genhtml_branch_coverage=1 00:27:47.171 --rc genhtml_function_coverage=1 00:27:47.171 --rc genhtml_legend=1 00:27:47.171 --rc geninfo_all_blocks=1 00:27:47.171 --rc geninfo_unexecuted_blocks=1 00:27:47.171 00:27:47.171 ' 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:47.171 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.171 --rc genhtml_branch_coverage=1 00:27:47.171 --rc genhtml_function_coverage=1 00:27:47.171 --rc genhtml_legend=1 00:27:47.171 --rc geninfo_all_blocks=1 00:27:47.171 --rc geninfo_unexecuted_blocks=1 00:27:47.171 00:27:47.171 ' 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:47.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.171 --rc genhtml_branch_coverage=1 00:27:47.171 --rc genhtml_function_coverage=1 00:27:47.171 --rc genhtml_legend=1 00:27:47.171 --rc geninfo_all_blocks=1 00:27:47.171 --rc geninfo_unexecuted_blocks=1 00:27:47.171 00:27:47.171 ' 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:47.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.171 --rc genhtml_branch_coverage=1 00:27:47.171 --rc genhtml_function_coverage=1 00:27:47.171 --rc genhtml_legend=1 00:27:47.171 --rc geninfo_all_blocks=1 00:27:47.171 --rc geninfo_unexecuted_blocks=1 00:27:47.171 00:27:47.171 ' 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.171 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:47.172 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:47.172 Cannot find device "nvmf_init_br" 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:27:47.172 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:47.430 Cannot find device "nvmf_init_br2" 00:27:47.430 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:27:47.430 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:47.430 Cannot find device "nvmf_tgt_br" 00:27:47.430 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:27:47.430 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:47.430 Cannot find device "nvmf_tgt_br2" 00:27:47.430 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:47.431 Cannot find device "nvmf_init_br" 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:27:47.431 Cannot find device "nvmf_init_br2" 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:47.431 Cannot find device "nvmf_tgt_br" 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:47.431 Cannot find device "nvmf_tgt_br2" 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:47.431 Cannot find device "nvmf_br" 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:47.431 Cannot find device "nvmf_init_if" 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:47.431 Cannot find device "nvmf_init_if2" 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:47.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:47.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:47.431 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:47.690 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:47.690 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:27:47.690 00:27:47.690 --- 10.0.0.3 ping statistics --- 00:27:47.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.690 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:47.690 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:47.690 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:27:47.690 00:27:47.690 --- 10.0.0.4 ping statistics --- 00:27:47.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.690 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:47.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:27:47.690 00:27:47.690 --- 10.0.0.1 ping statistics --- 00:27:47.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.690 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:47.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:27:47.690 00:27:47.690 --- 10.0.0.2 ping statistics --- 00:27:47.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.690 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:27:47.690 05:45:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:48.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:48.517 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:48.517 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:48.517 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=92475 00:27:48.518 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 92475 00:27:48.518 05:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 92475 ']' 00:27:48.518 05:45:28 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:48.518 05:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.518 05:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.518 05:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.518 05:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.518 05:45:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:48.776 [2024-12-16 05:45:28.825950] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:27:48.777 [2024-12-16 05:45:28.826125] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.777 [2024-12-16 05:45:29.016794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.035 [2024-12-16 05:45:29.148629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.035 [2024-12-16 05:45:29.148697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.035 [2024-12-16 05:45:29.148721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.035 [2024-12-16 05:45:29.148737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.035 [2024-12-16 05:45:29.148753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.035 [2024-12-16 05:45:29.150974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.035 [2024-12-16 05:45:29.151124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.035 [2024-12-16 05:45:29.151215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.035 [2024-12-16 05:45:29.151422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.294 [2024-12-16 05:45:29.372525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:49.552 05:45:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.552 05:45:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:27:49.552 05:45:29 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:49.552 05:45:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:49.552 05:45:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:27:49.811 05:45:29 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:49.811 05:45:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:49.811 ************************************ 00:27:49.811 START TEST spdk_target_abort 00:27:49.811 ************************************ 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:49.811 spdk_targetn1 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:49.811 [2024-12-16 05:45:29.954576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:49.811 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.812 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:49.812 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.812 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:27:49.812 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.812 05:45:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:49.812 [2024-12-16 05:45:30.000341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:49.812 05:45:30 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:49.812 05:45:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:54.000 Initializing NVMe Controllers 00:27:54.000 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:27:54.000 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:54.000 Initialization complete. Launching workers. 
00:27:54.000 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8290, failed: 0 00:27:54.000 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1035, failed to submit 7255 00:27:54.000 success 692, unsuccessful 343, failed 0 00:27:54.000 05:45:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:54.000 05:45:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:57.287 Initializing NVMe Controllers 00:27:57.287 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:27:57.287 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:57.287 Initialization complete. Launching workers. 00:27:57.287 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:27:57.287 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1170, failed to submit 7830 00:27:57.287 success 360, unsuccessful 810, failed 0 00:27:57.287 05:45:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:57.287 05:45:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:00.573 Initializing NVMe Controllers 00:28:00.573 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:00.573 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:00.573 Initialization complete. Launching workers. 
00:28:00.573 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28134, failed: 0 00:28:00.573 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2232, failed to submit 25902 00:28:00.573 success 385, unsuccessful 1847, failed 0 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 92475 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 92475 ']' 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 92475 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92475 00:28:00.573 killing process with pid 92475 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:00.573 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92475' 00:28:00.574 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 92475 00:28:00.574 05:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 92475 00:28:01.141 ************************************ 00:28:01.141 END TEST spdk_target_abort 00:28:01.141 ************************************ 00:28:01.141 00:28:01.141 real 0m11.355s 00:28:01.141 user 0m45.201s 00:28:01.141 sys 0m2.211s 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:01.141 05:45:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:01.141 05:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:01.141 05:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:01.141 05:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:01.141 ************************************ 00:28:01.141 START TEST kernel_target_abort 00:28:01.141 
************************************ 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:01.141 05:45:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:01.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:01.659 Waiting for block devices as requested 00:28:01.659 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:01.659 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:01.918 No valid GPT data, bailing 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:01.918 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:02.177 No valid GPT data, bailing 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:02.177 No valid GPT data, bailing 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:02.177 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:02.178 No valid GPT data, bailing 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:28:02.178 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec --hostid=ccafdfa8-c1c5-4fda-89cf-286fc282eeec -a 10.0.0.1 -t tcp -s 4420 00:28:02.437 00:28:02.437 Discovery Log Number of Records 2, Generation counter 2 00:28:02.437 =====Discovery Log Entry 0====== 00:28:02.437 trtype: tcp 00:28:02.437 adrfam: ipv4 00:28:02.437 subtype: current discovery subsystem 00:28:02.437 treq: not specified, sq flow control disable supported 00:28:02.437 portid: 1 00:28:02.437 trsvcid: 4420 00:28:02.437 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:02.437 traddr: 10.0.0.1 00:28:02.437 eflags: none 00:28:02.437 sectype: none 00:28:02.437 =====Discovery Log Entry 1====== 00:28:02.437 trtype: tcp 00:28:02.437 adrfam: ipv4 00:28:02.437 subtype: nvme subsystem 00:28:02.437 treq: not specified, sq flow control disable supported 00:28:02.437 portid: 1 00:28:02.437 trsvcid: 4420 00:28:02.437 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:02.437 traddr: 10.0.0.1 00:28:02.437 eflags: none 00:28:02.437 sectype: none 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:02.437 05:45:42 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:02.437 05:45:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:05.725 Initializing NVMe Controllers 00:28:05.725 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:05.725 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:05.725 Initialization complete. Launching workers. 00:28:05.725 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25588, failed: 0 00:28:05.725 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25588, failed to submit 0 00:28:05.725 success 0, unsuccessful 25588, failed 0 00:28:05.725 05:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:05.725 05:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:09.013 Initializing NVMe Controllers 00:28:09.013 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:09.013 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:09.013 Initialization complete. Launching workers. 
00:28:09.013 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54834, failed: 0 00:28:09.013 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22582, failed to submit 32252 00:28:09.013 success 0, unsuccessful 22582, failed 0 00:28:09.013 05:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:09.013 05:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:12.342 Initializing NVMe Controllers 00:28:12.342 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:12.342 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:12.342 Initialization complete. Launching workers. 00:28:12.342 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59999, failed: 0 00:28:12.342 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15016, failed to submit 44983 00:28:12.342 success 0, unsuccessful 15016, failed 0 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:12.342 05:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:12.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:13.478 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:13.478 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:13.737 00:28:13.737 real 0m12.485s 00:28:13.737 user 0m6.434s 00:28:13.737 sys 0m3.683s 00:28:13.737 05:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.737 05:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:13.737 ************************************ 00:28:13.737 END TEST kernel_target_abort 00:28:13.737 ************************************ 00:28:13.737 05:45:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:13.737 05:45:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:13.737 
05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.737 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:28:13.737 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.737 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.738 rmmod nvme_tcp 00:28:13.738 rmmod nvme_fabrics 00:28:13.738 rmmod nvme_keyring 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 92475 ']' 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 92475 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 92475 ']' 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 92475 00:28:13.738 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92475) - No such process 00:28:13.738 Process with pid 92475 is not found 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 92475 is not found' 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:13.738 05:45:53 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:13.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:14.257 Waiting for block devices as requested 00:28:14.257 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:14.257 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:14.257 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:14.516 05:45:54 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:28:14.516 00:28:14.516 real 0m27.554s 00:28:14.516 user 0m53.009s 00:28:14.516 sys 0m7.364s 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:14.516 05:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:14.516 ************************************ 00:28:14.516 END TEST nvmf_abort_qd_sizes 00:28:14.516 ************************************ 00:28:14.776 05:45:54 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:14.776 05:45:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:14.776 05:45:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:14.776 05:45:54 -- common/autotest_common.sh@10 -- # set +x 00:28:14.776 ************************************ 00:28:14.776 START TEST keyring_file 00:28:14.776 ************************************ 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:14.776 * Looking for test storage... 
00:28:14.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:14.776 05:45:54 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:14.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.776 --rc genhtml_branch_coverage=1 00:28:14.776 --rc genhtml_function_coverage=1 00:28:14.776 --rc genhtml_legend=1 00:28:14.776 --rc geninfo_all_blocks=1 00:28:14.776 --rc geninfo_unexecuted_blocks=1 00:28:14.776 00:28:14.776 ' 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:14.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.776 --rc genhtml_branch_coverage=1 00:28:14.776 --rc genhtml_function_coverage=1 00:28:14.776 --rc genhtml_legend=1 00:28:14.776 --rc geninfo_all_blocks=1 00:28:14.776 --rc 
geninfo_unexecuted_blocks=1 00:28:14.776 00:28:14.776 ' 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:14.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.776 --rc genhtml_branch_coverage=1 00:28:14.776 --rc genhtml_function_coverage=1 00:28:14.776 --rc genhtml_legend=1 00:28:14.776 --rc geninfo_all_blocks=1 00:28:14.776 --rc geninfo_unexecuted_blocks=1 00:28:14.776 00:28:14.776 ' 00:28:14.776 05:45:54 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:14.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.776 --rc genhtml_branch_coverage=1 00:28:14.776 --rc genhtml_function_coverage=1 00:28:14.776 --rc genhtml_legend=1 00:28:14.776 --rc geninfo_all_blocks=1 00:28:14.776 --rc geninfo_unexecuted_blocks=1 00:28:14.776 00:28:14.776 ' 00:28:14.776 05:45:54 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:14.776 05:45:54 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:14.776 05:45:54 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:14.776 05:45:54 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.776 05:45:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.776 05:45:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.776 05:45:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.776 05:45:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.776 05:45:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.776 05:45:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.776 05:45:55 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.776 05:45:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.776 05:45:55 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:14.777 05:45:55 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:14.777 05:45:55 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.777 05:45:55 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.777 05:45:55 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.777 05:45:55 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.777 05:45:55 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.777 05:45:55 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.777 05:45:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:14.777 05:45:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@51 -- # : 0 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:14.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:14.777 05:45:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:14.777 05:45:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:14.777 05:45:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:14.777 05:45:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:14.777 05:45:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:14.777 05:45:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:14.777 05:45:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:14.777 05:45:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:14.777 05:45:55 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:14.777 05:45:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:14.777 05:45:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:14.777 05:45:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:14.777 05:45:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VzTifPWAGw 00:28:14.777 05:45:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:14.777 05:45:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VzTifPWAGw 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VzTifPWAGw 00:28:15.035 05:45:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VzTifPWAGw 00:28:15.035 05:45:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.20ShuOAzOa 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:15.035 05:45:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:15.035 05:45:55 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:15.035 05:45:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:15.035 05:45:55 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:15.035 05:45:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:15.035 05:45:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.20ShuOAzOa 00:28:15.035 05:45:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.20ShuOAzOa 00:28:15.035 05:45:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.20ShuOAzOa 00:28:15.035 05:45:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=93496 00:28:15.035 05:45:55 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.035 05:45:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 93496 00:28:15.035 05:45:55 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 93496 ']' 00:28:15.035 05:45:55 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.035 05:45:55 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
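For orientation, each prep_key call traced above reduces to three steps: allocate a temp file, write the PSK in the NVMe-TLS interchange format, and restrict the file mode before the path is recorded for later keyring registration. A rough sketch (not the literal body of keyring/common.sh; the payload encoding is delegated to format_interchange_psk in nvmf/common.sh):

    # Roughly what 'prep_key key0 00112233445566778899aabbccddeeff 0' does above.
    path=$(mktemp)                                                        # e.g. /tmp/tmp.VzTifPWAGw in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"   # digest argument 0, as passed by the test
    chmod 0600 "$path"                                                    # keep the key file owner-only
    key0path=$path                                                        # registered later via keyring_file_add_key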
00:28:15.035 05:45:55 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.035 05:45:55 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.035 05:45:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:15.035 [2024-12-16 05:45:55.272211] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:15.035 [2024-12-16 05:45:55.272393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93496 ] 00:28:15.293 [2024-12-16 05:45:55.459262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.552 [2024-12-16 05:45:55.583430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.811 [2024-12-16 05:45:55.815625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:16.070 05:45:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:16.070 [2024-12-16 05:45:56.245020] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.070 null0 00:28:16.070 [2024-12-16 05:45:56.276971] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:16.070 [2024-12-16 05:45:56.277202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.070 05:45:56 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.070 05:45:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:16.070 [2024-12-16 05:45:56.304990] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:16.070 request: 00:28:16.070 { 00:28:16.070 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.070 "secure_channel": false, 00:28:16.070 "listen_address": { 00:28:16.070 "trtype": "tcp", 00:28:16.070 "traddr": "127.0.0.1", 00:28:16.070 "trsvcid": "4420" 00:28:16.070 }, 00:28:16.070 "method": "nvmf_subsystem_add_listener", 
00:28:16.070 "req_id": 1 00:28:16.070 } 00:28:16.070 Got JSON-RPC error response 00:28:16.070 response: 00:28:16.070 { 00:28:16.070 "code": -32602, 00:28:16.070 "message": "Invalid parameters" 00:28:16.070 } 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:16.071 05:45:56 keyring_file -- keyring/file.sh@47 -- # bperfpid=93509 00:28:16.071 05:45:56 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:16.071 05:45:56 keyring_file -- keyring/file.sh@49 -- # waitforlisten 93509 /var/tmp/bperf.sock 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 93509 ']' 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.071 05:45:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:16.330 [2024-12-16 05:45:56.422620] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
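The NOT/es bookkeeping above is the suite's expected-failure idiom: adding a listener the target already exposes must be rejected, and the wrapper turns the non-zero exit status into a pass. Stripped of the helpers, the assertion is roughly:

    # Re-adding the 127.0.0.1:4420 listener for cnode0 must fail with
    # "Listener already exists" (JSON-RPC -32602); success here would be a test failure.
    if rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
        echo 'listener was added twice - expected an error' >&2
        exit 1
    fi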
00:28:16.330 [2024-12-16 05:45:56.422786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93509 ] 00:28:16.589 [2024-12-16 05:45:56.608984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.589 [2024-12-16 05:45:56.733721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.848 [2024-12-16 05:45:56.906265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:17.108 05:45:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.108 05:45:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:17.108 05:45:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VzTifPWAGw 00:28:17.108 05:45:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VzTifPWAGw 00:28:17.367 05:45:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.20ShuOAzOa 00:28:17.367 05:45:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.20ShuOAzOa 00:28:17.626 05:45:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:28:17.626 05:45:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:17.626 05:45:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:17.626 05:45:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.626 05:45:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:17.885 05:45:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.VzTifPWAGw == \/\t\m\p\/\t\m\p\.\V\z\T\i\f\P\W\A\G\w ]] 00:28:17.885 05:45:58 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:28:17.885 05:45:58 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:28:17.885 05:45:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:17.885 05:45:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:17.885 05:45:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:18.143 05:45:58 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.20ShuOAzOa == \/\t\m\p\/\t\m\p\.\2\0\S\h\u\O\A\z\O\a ]] 00:28:18.143 05:45:58 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:28:18.143 05:45:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:18.143 05:45:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:18.143 05:45:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:18.143 05:45:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:18.143 05:45:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:18.402 05:45:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:18.402 05:45:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:28:18.402 05:45:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:18.402 05:45:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:18.403 05:45:58 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:18.403 05:45:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:18.403 05:45:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:18.661 05:45:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:28:18.661 05:45:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:18.661 05:45:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:18.920 [2024-12-16 05:45:59.063247] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:18.920 nvme0n1 00:28:18.920 05:45:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:28:18.920 05:45:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:18.920 05:45:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:18.920 05:45:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:18.920 05:45:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:19.179 05:45:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:19.438 05:45:59 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:28:19.438 05:45:59 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:28:19.438 05:45:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:19.438 05:45:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:19.438 05:45:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:19.438 05:45:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:19.438 05:45:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:19.696 05:45:59 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:28:19.696 05:45:59 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:19.696 Running I/O for 1 seconds... 
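Up to this point the trace has registered two file-based keys over the bdevperf RPC socket, verified their paths and reference counts, and attached an NVMe/TCP controller that takes its TLS PSK from key0 before kicking off the bdevperf run above. A minimal sketch of that same sequence driven directly through rpc.py — the rpc.py path, socket, and RPC arguments mirror the log, while the key-file paths are placeholders, not the mktemp names used by the test:

```python
#!/usr/bin/env python3
"""Sketch of the keyring_file setup flow, driven through SPDK's rpc.py."""
import json
import subprocess

RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path as used in the log
SOCK = "/var/tmp/bperf.sock"                             # bdevperf RPC socket

def rpc(*args: str) -> str:
    """Invoke rpc.py against the bdevperf RPC socket and return its stdout."""
    return subprocess.run([RPC_PY, "-s", SOCK, *args],
                          check=True, capture_output=True, text=True).stdout

# Register two file-based keys (keyring/file.sh@50-51); key files are placeholders.
rpc("keyring_file_add_key", "key0", "/tmp/key0.psk")
rpc("keyring_file_add_key", "key1", "/tmp/key1.psk")

# Same check that get_key/get_refcnt perform with jq on keyring_get_keys output.
keys = json.loads(rpc("keyring_get_keys"))
key0 = next(k for k in keys if k["name"] == "key0")
assert key0["path"] == "/tmp/key0.psk" and key0["refcnt"] == 1

# Attach an NVMe/TCP controller whose TLS PSK comes from key0 (keyring/file.sh@58).
rpc("bdev_nvme_attach_controller", "-b", "nvme0", "-t", "tcp",
    "-a", "127.0.0.1", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode0", "-q", "nqn.2016-06.io.spdk:host0",
    "--psk", "key0")
```

After a successful attach, the key's refcnt rises to 2 (controller plus file reference), which is exactly what the `(( 2 == 2 ))` check in the trace confirms.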
00:28:20.633 9694.00 IOPS, 37.87 MiB/s 00:28:20.633 Latency(us) 00:28:20.633 [2024-12-16T05:46:00.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.633 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:20.633 nvme0n1 : 1.01 9745.16 38.07 0.00 0.00 13085.63 6166.34 23235.49 00:28:20.633 [2024-12-16T05:46:00.892Z] =================================================================================================================== 00:28:20.633 [2024-12-16T05:46:00.892Z] Total : 9745.16 38.07 0.00 0.00 13085.63 6166.34 23235.49 00:28:20.633 { 00:28:20.633 "results": [ 00:28:20.633 { 00:28:20.633 "job": "nvme0n1", 00:28:20.633 "core_mask": "0x2", 00:28:20.633 "workload": "randrw", 00:28:20.633 "percentage": 50, 00:28:20.633 "status": "finished", 00:28:20.633 "queue_depth": 128, 00:28:20.633 "io_size": 4096, 00:28:20.633 "runtime": 1.007885, 00:28:20.633 "iops": 9745.159417989156, 00:28:20.633 "mibps": 38.06702897652014, 00:28:20.633 "io_failed": 0, 00:28:20.633 "io_timeout": 0, 00:28:20.633 "avg_latency_us": 13085.630053497713, 00:28:20.633 "min_latency_us": 6166.341818181818, 00:28:20.633 "max_latency_us": 23235.49090909091 00:28:20.633 } 00:28:20.633 ], 00:28:20.633 "core_count": 1 00:28:20.633 } 00:28:20.892 05:46:00 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:20.892 05:46:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:21.150 05:46:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:28:21.150 05:46:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:21.150 05:46:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:21.150 05:46:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:21.150 05:46:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:21.150 05:46:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:21.408 05:46:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:21.409 05:46:01 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:28:21.409 05:46:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:21.409 05:46:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:21.409 05:46:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:21.409 05:46:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:21.409 05:46:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:21.667 05:46:01 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:28:21.667 05:46:01 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:21.667 05:46:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:21.667 05:46:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:21.667 05:46:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:21.667 05:46:01 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.667 05:46:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:21.667 05:46:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.667 05:46:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:21.668 05:46:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:21.926 [2024-12-16 05:46:01.965169] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:21.926 [2024-12-16 05:46:01.965183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:28:21.926 [2024-12-16 05:46:01.966157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:28:21.926 [2024-12-16 05:46:01.967149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:21.926 [2024-12-16 05:46:01.967200] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:21.926 [2024-12-16 05:46:01.967232] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:21.926 [2024-12-16 05:46:01.967246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
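The attach attempt above deliberately uses key1, whose PSK does not match the target, so the TCP connection is torn down (errno 107) and the controller ends up in a failed state; the request/response dump that follows shows the resulting -5 Input/output error, which the NOT wrapper in autotest_common.sh treats as the expected outcome. A self-contained sketch of that negative check, with paths and arguments mirroring the log:

```python
import subprocess

RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

attach = [RPC_PY, "-s", SOCK, "bdev_nvme_attach_controller",
          "-b", "nvme0", "-t", "tcp", "-a", "127.0.0.1", "-s", "4420",
          "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode0",
          "-q", "nqn.2016-06.io.spdk:host0", "--psk", "key1"]

result = subprocess.run(attach, capture_output=True, text=True)

# rpc.py exits non-zero and prints the -5 "Input/output error" response shown
# below; a non-zero status is what the NOT wrapper counts as a passing result.
assert result.returncode != 0, "attach with the wrong PSK unexpectedly succeeded"
```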
00:28:21.926 request: 00:28:21.926 { 00:28:21.926 "name": "nvme0", 00:28:21.926 "trtype": "tcp", 00:28:21.926 "traddr": "127.0.0.1", 00:28:21.926 "adrfam": "ipv4", 00:28:21.926 "trsvcid": "4420", 00:28:21.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:21.926 "prchk_reftag": false, 00:28:21.926 "prchk_guard": false, 00:28:21.926 "hdgst": false, 00:28:21.926 "ddgst": false, 00:28:21.926 "psk": "key1", 00:28:21.926 "allow_unrecognized_csi": false, 00:28:21.926 "method": "bdev_nvme_attach_controller", 00:28:21.926 "req_id": 1 00:28:21.926 } 00:28:21.926 Got JSON-RPC error response 00:28:21.926 response: 00:28:21.926 { 00:28:21.926 "code": -5, 00:28:21.926 "message": "Input/output error" 00:28:21.926 } 00:28:21.926 05:46:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:21.926 05:46:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:21.926 05:46:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:21.926 05:46:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:21.926 05:46:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:28:21.926 05:46:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:21.926 05:46:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:21.926 05:46:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:21.926 05:46:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:21.926 05:46:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:22.186 05:46:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:22.186 05:46:02 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:28:22.186 05:46:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:22.186 05:46:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:22.186 05:46:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:22.186 05:46:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:22.186 05:46:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:22.445 05:46:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:28:22.445 05:46:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:28:22.445 05:46:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:22.703 05:46:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:28:22.703 05:46:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:22.703 05:46:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:28:22.703 05:46:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:28:22.703 05:46:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:22.963 05:46:03 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:28:22.963 05:46:03 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.VzTifPWAGw 00:28:22.963 05:46:03 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VzTifPWAGw 00:28:22.963 05:46:03 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:28:22.963 05:46:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VzTifPWAGw 00:28:22.963 05:46:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:22.963 05:46:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.963 05:46:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:22.963 05:46:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.963 05:46:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VzTifPWAGw 00:28:22.963 05:46:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VzTifPWAGw 00:28:23.222 [2024-12-16 05:46:03.423975] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VzTifPWAGw': 0100660 00:28:23.222 [2024-12-16 05:46:03.424043] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:23.222 request: 00:28:23.222 { 00:28:23.222 "name": "key0", 00:28:23.222 "path": "/tmp/tmp.VzTifPWAGw", 00:28:23.222 "method": "keyring_file_add_key", 00:28:23.222 "req_id": 1 00:28:23.222 } 00:28:23.222 Got JSON-RPC error response 00:28:23.222 response: 00:28:23.222 { 00:28:23.222 "code": -1, 00:28:23.222 "message": "Operation not permitted" 00:28:23.222 } 00:28:23.222 05:46:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:23.222 05:46:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:23.222 05:46:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:23.222 05:46:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:23.222 05:46:03 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.VzTifPWAGw 00:28:23.222 05:46:03 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VzTifPWAGw 00:28:23.222 05:46:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VzTifPWAGw 00:28:23.480 05:46:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.VzTifPWAGw 00:28:23.481 05:46:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:28:23.481 05:46:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:23.481 05:46:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:23.481 05:46:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.481 05:46:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.481 05:46:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:23.739 05:46:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:28:23.740 05:46:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:23.740 05:46:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:23.740 05:46:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:23.740 05:46:03 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:23.740 05:46:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:23.740 05:46:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:23.740 05:46:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:23.740 05:46:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:23.740 05:46:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:23.999 [2024-12-16 05:46:04.116249] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VzTifPWAGw': No such file or directory 00:28:23.999 [2024-12-16 05:46:04.116312] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:23.999 [2024-12-16 05:46:04.116355] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:23.999 [2024-12-16 05:46:04.116375] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:28:23.999 [2024-12-16 05:46:04.116421] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:23.999 [2024-12-16 05:46:04.116434] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:23.999 request: 00:28:23.999 { 00:28:23.999 "name": "nvme0", 00:28:23.999 "trtype": "tcp", 00:28:23.999 "traddr": "127.0.0.1", 00:28:23.999 "adrfam": "ipv4", 00:28:23.999 "trsvcid": "4420", 00:28:23.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.999 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:23.999 "prchk_reftag": false, 00:28:23.999 "prchk_guard": false, 00:28:23.999 "hdgst": false, 00:28:23.999 "ddgst": false, 00:28:23.999 "psk": "key0", 00:28:23.999 "allow_unrecognized_csi": false, 00:28:23.999 "method": "bdev_nvme_attach_controller", 00:28:23.999 "req_id": 1 00:28:23.999 } 00:28:23.999 Got JSON-RPC error response 00:28:23.999 response: 00:28:23.999 { 00:28:23.999 "code": -19, 00:28:23.999 "message": "No such device" 00:28:23.999 } 00:28:23.999 05:46:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:23.999 05:46:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:23.999 05:46:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:23.999 05:46:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:23.999 05:46:04 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:28:23.999 05:46:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:24.258 05:46:04 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:24.258 05:46:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:24.258 05:46:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:24.258 05:46:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:24.258 
05:46:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:24.258 05:46:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:24.258 05:46:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LiLJNDeg9X 00:28:24.258 05:46:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:24.258 05:46:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:24.258 05:46:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:24.258 05:46:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:24.258 05:46:04 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:24.258 05:46:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:24.258 05:46:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:24.258 05:46:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LiLJNDeg9X 00:28:24.258 05:46:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LiLJNDeg9X 00:28:24.258 05:46:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.LiLJNDeg9X 00:28:24.258 05:46:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LiLJNDeg9X 00:28:24.258 05:46:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LiLJNDeg9X 00:28:24.517 05:46:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:24.517 05:46:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:24.776 nvme0n1 00:28:24.776 05:46:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:28:24.776 05:46:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:24.776 05:46:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:24.776 05:46:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:24.776 05:46:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:24.776 05:46:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:25.037 05:46:05 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:28:25.037 05:46:05 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:28:25.037 05:46:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:25.297 05:46:05 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:28:25.297 05:46:05 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:28:25.297 05:46:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:25.297 05:46:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:25.297 05:46:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:25.555 05:46:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:28:25.555 05:46:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:28:25.555 05:46:05 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:28:25.555 05:46:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:25.555 05:46:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:25.555 05:46:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:25.555 05:46:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:25.814 05:46:06 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:28:25.814 05:46:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:25.814 05:46:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:26.073 05:46:06 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:28:26.073 05:46:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.073 05:46:06 keyring_file -- keyring/file.sh@105 -- # jq length 00:28:26.332 05:46:06 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:28:26.332 05:46:06 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LiLJNDeg9X 00:28:26.332 05:46:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LiLJNDeg9X 00:28:26.591 05:46:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.20ShuOAzOa 00:28:26.591 05:46:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.20ShuOAzOa 00:28:26.850 05:46:07 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:26.850 05:46:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:27.108 nvme0n1 00:28:27.108 05:46:07 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:28:27.108 05:46:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:27.676 05:46:07 keyring_file -- keyring/file.sh@113 -- # config='{ 00:28:27.676 "subsystems": [ 00:28:27.676 { 00:28:27.676 "subsystem": "keyring", 00:28:27.676 "config": [ 00:28:27.676 { 00:28:27.676 "method": "keyring_file_add_key", 00:28:27.676 "params": { 00:28:27.676 "name": "key0", 00:28:27.676 "path": "/tmp/tmp.LiLJNDeg9X" 00:28:27.676 } 00:28:27.676 }, 00:28:27.677 { 00:28:27.677 "method": "keyring_file_add_key", 00:28:27.677 "params": { 00:28:27.677 "name": "key1", 00:28:27.677 "path": "/tmp/tmp.20ShuOAzOa" 00:28:27.677 } 00:28:27.677 } 00:28:27.677 ] 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "subsystem": "iobuf", 00:28:27.677 "config": [ 00:28:27.677 { 00:28:27.677 "method": "iobuf_set_options", 00:28:27.677 "params": { 00:28:27.677 "small_pool_count": 8192, 00:28:27.677 "large_pool_count": 1024, 00:28:27.677 "small_bufsize": 8192, 00:28:27.677 "large_bufsize": 135168, 00:28:27.677 "enable_numa": false 00:28:27.677 } 00:28:27.677 } 00:28:27.677 ] 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "subsystem": 
"sock", 00:28:27.677 "config": [ 00:28:27.677 { 00:28:27.677 "method": "sock_set_default_impl", 00:28:27.677 "params": { 00:28:27.677 "impl_name": "uring" 00:28:27.677 } 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "method": "sock_impl_set_options", 00:28:27.677 "params": { 00:28:27.677 "impl_name": "ssl", 00:28:27.677 "recv_buf_size": 4096, 00:28:27.677 "send_buf_size": 4096, 00:28:27.677 "enable_recv_pipe": true, 00:28:27.677 "enable_quickack": false, 00:28:27.677 "enable_placement_id": 0, 00:28:27.677 "enable_zerocopy_send_server": true, 00:28:27.677 "enable_zerocopy_send_client": false, 00:28:27.677 "zerocopy_threshold": 0, 00:28:27.677 "tls_version": 0, 00:28:27.677 "enable_ktls": false 00:28:27.677 } 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "method": "sock_impl_set_options", 00:28:27.677 "params": { 00:28:27.677 "impl_name": "posix", 00:28:27.677 "recv_buf_size": 2097152, 00:28:27.677 "send_buf_size": 2097152, 00:28:27.677 "enable_recv_pipe": true, 00:28:27.677 "enable_quickack": false, 00:28:27.677 "enable_placement_id": 0, 00:28:27.677 "enable_zerocopy_send_server": true, 00:28:27.677 "enable_zerocopy_send_client": false, 00:28:27.677 "zerocopy_threshold": 0, 00:28:27.677 "tls_version": 0, 00:28:27.677 "enable_ktls": false 00:28:27.677 } 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "method": "sock_impl_set_options", 00:28:27.677 "params": { 00:28:27.677 "impl_name": "uring", 00:28:27.677 "recv_buf_size": 2097152, 00:28:27.677 "send_buf_size": 2097152, 00:28:27.677 "enable_recv_pipe": true, 00:28:27.677 "enable_quickack": false, 00:28:27.677 "enable_placement_id": 0, 00:28:27.677 "enable_zerocopy_send_server": false, 00:28:27.677 "enable_zerocopy_send_client": false, 00:28:27.677 "zerocopy_threshold": 0, 00:28:27.677 "tls_version": 0, 00:28:27.677 "enable_ktls": false 00:28:27.677 } 00:28:27.677 } 00:28:27.677 ] 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "subsystem": "vmd", 00:28:27.677 "config": [] 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "subsystem": "accel", 00:28:27.677 "config": [ 00:28:27.677 { 00:28:27.677 "method": "accel_set_options", 00:28:27.677 "params": { 00:28:27.677 "small_cache_size": 128, 00:28:27.677 "large_cache_size": 16, 00:28:27.677 "task_count": 2048, 00:28:27.677 "sequence_count": 2048, 00:28:27.677 "buf_count": 2048 00:28:27.677 } 00:28:27.677 } 00:28:27.677 ] 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "subsystem": "bdev", 00:28:27.677 "config": [ 00:28:27.677 { 00:28:27.677 "method": "bdev_set_options", 00:28:27.677 "params": { 00:28:27.677 "bdev_io_pool_size": 65535, 00:28:27.677 "bdev_io_cache_size": 256, 00:28:27.677 "bdev_auto_examine": true, 00:28:27.677 "iobuf_small_cache_size": 128, 00:28:27.677 "iobuf_large_cache_size": 16 00:28:27.677 } 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "method": "bdev_raid_set_options", 00:28:27.677 "params": { 00:28:27.677 "process_window_size_kb": 1024, 00:28:27.677 "process_max_bandwidth_mb_sec": 0 00:28:27.677 } 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "method": "bdev_iscsi_set_options", 00:28:27.677 "params": { 00:28:27.677 "timeout_sec": 30 00:28:27.677 } 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "method": "bdev_nvme_set_options", 00:28:27.677 "params": { 00:28:27.677 "action_on_timeout": "none", 00:28:27.677 "timeout_us": 0, 00:28:27.677 "timeout_admin_us": 0, 00:28:27.677 "keep_alive_timeout_ms": 10000, 00:28:27.677 "arbitration_burst": 0, 00:28:27.677 "low_priority_weight": 0, 00:28:27.677 "medium_priority_weight": 0, 00:28:27.677 "high_priority_weight": 0, 00:28:27.677 "nvme_adminq_poll_period_us": 
10000, 00:28:27.677 "nvme_ioq_poll_period_us": 0, 00:28:27.677 "io_queue_requests": 512, 00:28:27.677 "delay_cmd_submit": true, 00:28:27.677 "transport_retry_count": 4, 00:28:27.677 "bdev_retry_count": 3, 00:28:27.677 "transport_ack_timeout": 0, 00:28:27.677 "ctrlr_loss_timeout_sec": 0, 00:28:27.677 "reconnect_delay_sec": 0, 00:28:27.677 "fast_io_fail_timeout_sec": 0, 00:28:27.677 "disable_auto_failback": false, 00:28:27.677 "generate_uuids": false, 00:28:27.677 "transport_tos": 0, 00:28:27.677 "nvme_error_stat": false, 00:28:27.677 "rdma_srq_size": 0, 00:28:27.677 "io_path_stat": false, 00:28:27.677 "allow_accel_sequence": false, 00:28:27.677 "rdma_max_cq_size": 0, 00:28:27.677 "rdma_cm_event_timeout_ms": 0, 00:28:27.677 "dhchap_digests": [ 00:28:27.677 "sha256", 00:28:27.677 "sha384", 00:28:27.677 "sha512" 00:28:27.677 ], 00:28:27.677 "dhchap_dhgroups": [ 00:28:27.677 "null", 00:28:27.677 "ffdhe2048", 00:28:27.677 "ffdhe3072", 00:28:27.677 "ffdhe4096", 00:28:27.677 "ffdhe6144", 00:28:27.677 "ffdhe8192" 00:28:27.677 ], 00:28:27.677 "rdma_umr_per_io": false 00:28:27.677 } 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "method": "bdev_nvme_attach_controller", 00:28:27.677 "params": { 00:28:27.677 "name": "nvme0", 00:28:27.677 "trtype": "TCP", 00:28:27.677 "adrfam": "IPv4", 00:28:27.677 "traddr": "127.0.0.1", 00:28:27.677 "trsvcid": "4420", 00:28:27.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:27.677 "prchk_reftag": false, 00:28:27.677 "prchk_guard": false, 00:28:27.677 "ctrlr_loss_timeout_sec": 0, 00:28:27.677 "reconnect_delay_sec": 0, 00:28:27.677 "fast_io_fail_timeout_sec": 0, 00:28:27.677 "psk": "key0", 00:28:27.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:27.677 "hdgst": false, 00:28:27.677 "ddgst": false, 00:28:27.677 "multipath": "multipath" 00:28:27.677 } 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "method": "bdev_nvme_set_hotplug", 00:28:27.677 "params": { 00:28:27.677 "period_us": 100000, 00:28:27.677 "enable": false 00:28:27.677 } 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "method": "bdev_wait_for_examine" 00:28:27.677 } 00:28:27.677 ] 00:28:27.677 }, 00:28:27.677 { 00:28:27.677 "subsystem": "nbd", 00:28:27.677 "config": [] 00:28:27.677 } 00:28:27.677 ] 00:28:27.677 }' 00:28:27.677 05:46:07 keyring_file -- keyring/file.sh@115 -- # killprocess 93509 00:28:27.677 05:46:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 93509 ']' 00:28:27.677 05:46:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 93509 00:28:27.677 05:46:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:27.677 05:46:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.677 05:46:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93509 00:28:27.677 05:46:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:27.677 05:46:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:27.677 killing process with pid 93509 00:28:27.677 05:46:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93509' 00:28:27.677 05:46:07 keyring_file -- common/autotest_common.sh@973 -- # kill 93509 00:28:27.677 Received shutdown signal, test time was about 1.000000 seconds 00:28:27.677 00:28:27.677 Latency(us) 00:28:27.677 [2024-12-16T05:46:07.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.677 [2024-12-16T05:46:07.936Z] 
=================================================================================================================== 00:28:27.677 [2024-12-16T05:46:07.936Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.678 05:46:07 keyring_file -- common/autotest_common.sh@978 -- # wait 93509 00:28:28.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:28.246 05:46:08 keyring_file -- keyring/file.sh@118 -- # bperfpid=93767 00:28:28.246 05:46:08 keyring_file -- keyring/file.sh@120 -- # waitforlisten 93767 /var/tmp/bperf.sock 00:28:28.246 05:46:08 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 93767 ']' 00:28:28.246 05:46:08 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:28.246 05:46:08 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.246 05:46:08 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:28.246 05:46:08 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:28.246 05:46:08 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.246 05:46:08 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:28:28.246 "subsystems": [ 00:28:28.246 { 00:28:28.246 "subsystem": "keyring", 00:28:28.246 "config": [ 00:28:28.246 { 00:28:28.246 "method": "keyring_file_add_key", 00:28:28.246 "params": { 00:28:28.246 "name": "key0", 00:28:28.246 "path": "/tmp/tmp.LiLJNDeg9X" 00:28:28.246 } 00:28:28.246 }, 00:28:28.246 { 00:28:28.246 "method": "keyring_file_add_key", 00:28:28.246 "params": { 00:28:28.246 "name": "key1", 00:28:28.246 "path": "/tmp/tmp.20ShuOAzOa" 00:28:28.246 } 00:28:28.246 } 00:28:28.246 ] 00:28:28.246 }, 00:28:28.246 { 00:28:28.246 "subsystem": "iobuf", 00:28:28.246 "config": [ 00:28:28.246 { 00:28:28.246 "method": "iobuf_set_options", 00:28:28.246 "params": { 00:28:28.246 "small_pool_count": 8192, 00:28:28.246 "large_pool_count": 1024, 00:28:28.246 "small_bufsize": 8192, 00:28:28.246 "large_bufsize": 135168, 00:28:28.246 "enable_numa": false 00:28:28.246 } 00:28:28.246 } 00:28:28.246 ] 00:28:28.246 }, 00:28:28.246 { 00:28:28.246 "subsystem": "sock", 00:28:28.246 "config": [ 00:28:28.246 { 00:28:28.246 "method": "sock_set_default_impl", 00:28:28.246 "params": { 00:28:28.246 "impl_name": "uring" 00:28:28.246 } 00:28:28.246 }, 00:28:28.246 { 00:28:28.246 "method": "sock_impl_set_options", 00:28:28.246 "params": { 00:28:28.246 "impl_name": "ssl", 00:28:28.246 "recv_buf_size": 4096, 00:28:28.246 "send_buf_size": 4096, 00:28:28.246 "enable_recv_pipe": true, 00:28:28.246 "enable_quickack": false, 00:28:28.246 "enable_placement_id": 0, 00:28:28.246 "enable_zerocopy_send_server": true, 00:28:28.246 "enable_zerocopy_send_client": false, 00:28:28.246 "zerocopy_threshold": 0, 00:28:28.246 "tls_version": 0, 00:28:28.246 "enable_ktls": false 00:28:28.246 } 00:28:28.246 }, 00:28:28.246 { 00:28:28.246 "method": "sock_impl_set_options", 00:28:28.246 "params": { 00:28:28.246 "impl_name": "posix", 00:28:28.246 "recv_buf_size": 2097152, 00:28:28.246 "send_buf_size": 2097152, 00:28:28.246 "enable_recv_pipe": true, 00:28:28.246 "enable_quickack": false, 00:28:28.246 "enable_placement_id": 0, 00:28:28.246 "enable_zerocopy_send_server": true, 00:28:28.246 "enable_zerocopy_send_client": false, 00:28:28.246 "zerocopy_threshold": 0, 00:28:28.246 
"tls_version": 0, 00:28:28.246 "enable_ktls": false 00:28:28.246 } 00:28:28.246 }, 00:28:28.246 { 00:28:28.247 "method": "sock_impl_set_options", 00:28:28.247 "params": { 00:28:28.247 "impl_name": "uring", 00:28:28.247 "recv_buf_size": 2097152, 00:28:28.247 "send_buf_size": 2097152, 00:28:28.247 "enable_recv_pipe": true, 00:28:28.247 "enable_quickack": false, 00:28:28.247 "enable_placement_id": 0, 00:28:28.247 "enable_zerocopy_send_server": false, 00:28:28.247 "enable_zerocopy_send_client": false, 00:28:28.247 "zerocopy_threshold": 0, 00:28:28.247 "tls_version": 0, 00:28:28.247 "enable_ktls": false 00:28:28.247 } 00:28:28.247 } 00:28:28.247 ] 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "subsystem": "vmd", 00:28:28.247 "config": [] 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "subsystem": "accel", 00:28:28.247 "config": [ 00:28:28.247 { 00:28:28.247 "method": "accel_set_options", 00:28:28.247 "params": { 00:28:28.247 "small_cache_size": 128, 00:28:28.247 "large_cache_size": 16, 00:28:28.247 "task_count": 2048, 00:28:28.247 "sequence_count": 2048, 00:28:28.247 "buf_count": 2048 00:28:28.247 } 00:28:28.247 } 00:28:28.247 ] 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "subsystem": "bdev", 00:28:28.247 "config": [ 00:28:28.247 { 00:28:28.247 "method": "bdev_set_options", 00:28:28.247 "params": { 00:28:28.247 "bdev_io_pool_size": 65535, 00:28:28.247 "bdev_io_cache_size": 256, 00:28:28.247 "bdev_auto_examine": true, 00:28:28.247 "iobuf_small_cache_size": 128, 00:28:28.247 "iobuf_large_cache_size": 16 00:28:28.247 } 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "method": "bdev_raid_set_options", 00:28:28.247 "params": { 00:28:28.247 "process_window_size_kb": 1024, 00:28:28.247 "process_max_bandwidth_mb_sec": 0 00:28:28.247 } 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "method": "bdev_iscsi_set_options", 00:28:28.247 "params": { 00:28:28.247 "timeout_sec": 30 00:28:28.247 } 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "method": "bdev_nvme_set_options", 00:28:28.247 "params": { 00:28:28.247 "action_on_timeout": "none", 00:28:28.247 "timeout_us": 0, 00:28:28.247 "timeout_admin_us": 0, 00:28:28.247 "keep_alive_timeout_ms": 10000, 00:28:28.247 "arbitration_burst": 0, 00:28:28.247 "low_priority_weight": 0, 00:28:28.247 "medium_priority_weight": 0, 00:28:28.247 "high_priority_weight": 0, 00:28:28.247 "nvme_adminq_poll_period_us": 10000, 00:28:28.247 "nvme_ioq_poll_period_us": 0, 00:28:28.247 "io_queue_requests": 512, 00:28:28.247 "delay_cmd_submit": true, 00:28:28.247 "transport_retry_count": 4, 00:28:28.247 "bdev_retry_count": 3, 00:28:28.247 "transport_ack_timeout": 0, 00:28:28.247 "ctrlr_loss_timeout_sec": 0, 00:28:28.247 "reconnect_delay_sec": 0, 00:28:28.247 "fast_io_fail_timeout_sec": 0, 00:28:28.247 "disable_auto_failback": false, 00:28:28.247 "generate_uuids": false, 00:28:28.247 "transport_tos": 0, 00:28:28.247 "nvme_error_stat": false, 00:28:28.247 "rdma_srq_size": 0, 00:28:28.247 "io_path_stat": false, 00:28:28.247 "allow_accel_sequence": false, 00:28:28.247 "rdma_max_cq_size": 0, 00:28:28.247 "rdma_cm_event_timeout_ms": 0, 00:28:28.247 "dhchap_digests": [ 00:28:28.247 "sha256", 00:28:28.247 "sha384", 00:28:28.247 "sha512" 00:28:28.247 ], 00:28:28.247 "dhchap_dhgroups": [ 00:28:28.247 "null", 00:28:28.247 "ffdhe2048", 00:28:28.247 "ffdhe3072", 00:28:28.247 "ffdhe4096", 00:28:28.247 "ffdhe6144", 00:28:28.247 "ffdhe8192" 00:28:28.247 ], 00:28:28.247 "rdma_umr_per_io": false 00:28:28.247 } 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "method": "bdev_nvme_attach_controller", 00:28:28.247 "params": { 
00:28:28.247 "name": "nvme0", 00:28:28.247 "trtype": "TCP", 00:28:28.247 "adrfam": "IPv4", 00:28:28.247 "traddr": "127.0.0.1", 00:28:28.247 "trsvcid": "4420", 00:28:28.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:28.247 "prchk_reftag": false, 00:28:28.247 "prchk_guard": false, 00:28:28.247 "ctrlr_loss_timeout_sec": 0, 00:28:28.247 "reconnect_delay_sec": 0, 00:28:28.247 "fast_io_fail_timeout_sec": 0, 00:28:28.247 "psk": "key0", 00:28:28.247 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:28.247 "hdgst": false, 00:28:28.247 "ddgst": false, 00:28:28.247 "multipath": "multipath" 00:28:28.247 } 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "method": "bdev_nvme_set_hotplug", 00:28:28.247 "params": { 00:28:28.247 "period_us": 100000, 00:28:28.247 "enable": false 00:28:28.247 } 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "method": "bdev_wait_for_examine" 00:28:28.247 } 00:28:28.247 ] 00:28:28.247 }, 00:28:28.247 { 00:28:28.247 "subsystem": "nbd", 00:28:28.247 "config": [] 00:28:28.247 } 00:28:28.247 ] 00:28:28.247 }' 00:28:28.247 05:46:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:28.506 [2024-12-16 05:46:08.560533] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:28.506 [2024-12-16 05:46:08.560735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93767 ] 00:28:28.506 [2024-12-16 05:46:08.736176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.765 [2024-12-16 05:46:08.819256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.025 [2024-12-16 05:46:09.048327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:29.025 [2024-12-16 05:46:09.161438] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:29.284 05:46:09 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.284 05:46:09 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:29.284 05:46:09 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:28:29.284 05:46:09 keyring_file -- keyring/file.sh@121 -- # jq length 00:28:29.284 05:46:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:29.543 05:46:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:29.543 05:46:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:28:29.543 05:46:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:29.543 05:46:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:29.543 05:46:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:29.543 05:46:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:29.543 05:46:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:29.802 05:46:09 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:28:29.802 05:46:09 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:28:29.802 05:46:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:29.802 05:46:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:29.802 05:46:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
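The key checks around this point run against a second bdevperf instance (pid 93767) that was started with the configuration saved from the first one, so key0's refcnt is already 2: the config-driven bdev_nvme_attach_controller holds a reference on top of the file key itself. A sketch of that restart step under stated assumptions — the bdevperf flags match keyring/file.sh@116, but where the test feeds the JSON over /dev/fd/63 via process substitution, this sketch assumes the save_config output was captured to a plain file first:

```python
import subprocess
import tempfile

BDEVPERF = "/home/vagrant/spdk_repo/spdk/build/examples/bdevperf"

# Assumed: JSON previously captured with `rpc.py -s /var/tmp/bperf.sock save_config`
# (keyring/file.sh@113) and written to saved_config.json.
config_json = open("saved_config.json").read()

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as cfg:
    cfg.write(config_json)

# Same flags as the log; the saved config re-adds both keys and re-attaches nvme0.
subprocess.Popen([BDEVPERF, "-q", "128", "-o", "4k", "-w", "randrw", "-M", "50",
                  "-t", "1", "-m", "2", "-r", "/var/tmp/bperf.sock", "-z",
                  "-c", cfg.name])
```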
00:28:29.802 05:46:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:29.802 05:46:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:30.061 05:46:10 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:28:30.061 05:46:10 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:28:30.061 05:46:10 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:28:30.061 05:46:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:30.331 05:46:10 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:28:30.331 05:46:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:30.331 05:46:10 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LiLJNDeg9X /tmp/tmp.20ShuOAzOa 00:28:30.331 05:46:10 keyring_file -- keyring/file.sh@20 -- # killprocess 93767 00:28:30.331 05:46:10 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 93767 ']' 00:28:30.331 05:46:10 keyring_file -- common/autotest_common.sh@958 -- # kill -0 93767 00:28:30.331 05:46:10 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:30.332 05:46:10 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.332 05:46:10 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93767 00:28:30.332 killing process with pid 93767 00:28:30.332 Received shutdown signal, test time was about 1.000000 seconds 00:28:30.332 00:28:30.332 Latency(us) 00:28:30.332 [2024-12-16T05:46:10.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.332 [2024-12-16T05:46:10.591Z] =================================================================================================================== 00:28:30.332 [2024-12-16T05:46:10.591Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:30.332 05:46:10 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:30.332 05:46:10 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:30.332 05:46:10 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93767' 00:28:30.332 05:46:10 keyring_file -- common/autotest_common.sh@973 -- # kill 93767 00:28:30.332 05:46:10 keyring_file -- common/autotest_common.sh@978 -- # wait 93767 00:28:31.302 05:46:11 keyring_file -- keyring/file.sh@21 -- # killprocess 93496 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 93496 ']' 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@958 -- # kill -0 93496 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93496 00:28:31.302 killing process with pid 93496 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93496' 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@973 -- # kill 93496 00:28:31.302 05:46:11 keyring_file -- common/autotest_common.sh@978 -- # wait 93496 00:28:33.208 ************************************ 
00:28:33.208 END TEST keyring_file 00:28:33.208 ************************************ 00:28:33.208 00:28:33.208 real 0m18.234s 00:28:33.208 user 0m42.825s 00:28:33.208 sys 0m2.929s 00:28:33.208 05:46:13 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.208 05:46:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:33.208 05:46:13 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:28:33.208 05:46:13 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:33.208 05:46:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:33.208 05:46:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.208 05:46:13 -- common/autotest_common.sh@10 -- # set +x 00:28:33.208 ************************************ 00:28:33.208 START TEST keyring_linux 00:28:33.208 ************************************ 00:28:33.208 05:46:13 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:33.208 Joined session keyring: 515350711 00:28:33.208 * Looking for test storage... 00:28:33.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:33.208 05:46:13 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:33.208 05:46:13 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:28:33.208 05:46:13 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:33.208 05:46:13 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:33.208 05:46:13 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.208 05:46:13 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.208 05:46:13 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@345 -- # : 1 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@368 -- # return 0 00:28:33.209 05:46:13 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.209 05:46:13 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:33.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.209 --rc genhtml_branch_coverage=1 00:28:33.209 --rc genhtml_function_coverage=1 00:28:33.209 --rc genhtml_legend=1 00:28:33.209 --rc geninfo_all_blocks=1 00:28:33.209 --rc geninfo_unexecuted_blocks=1 00:28:33.209 00:28:33.209 ' 00:28:33.209 05:46:13 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:33.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.209 --rc genhtml_branch_coverage=1 00:28:33.209 --rc genhtml_function_coverage=1 00:28:33.209 --rc genhtml_legend=1 00:28:33.209 --rc geninfo_all_blocks=1 00:28:33.209 --rc geninfo_unexecuted_blocks=1 00:28:33.209 00:28:33.209 ' 00:28:33.209 05:46:13 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:33.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.209 --rc genhtml_branch_coverage=1 00:28:33.209 --rc genhtml_function_coverage=1 00:28:33.209 --rc genhtml_legend=1 00:28:33.209 --rc geninfo_all_blocks=1 00:28:33.209 --rc geninfo_unexecuted_blocks=1 00:28:33.209 00:28:33.209 ' 00:28:33.209 05:46:13 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:33.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.209 --rc genhtml_branch_coverage=1 00:28:33.209 --rc genhtml_function_coverage=1 00:28:33.209 --rc genhtml_legend=1 00:28:33.209 --rc geninfo_all_blocks=1 00:28:33.209 --rc geninfo_unexecuted_blocks=1 00:28:33.209 00:28:33.209 ' 00:28:33.209 05:46:13 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.209 05:46:13 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=ccafdfa8-c1c5-4fda-89cf-286fc282eeec 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.209 05:46:13 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.209 05:46:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.209 05:46:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.209 05:46:13 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.209 05:46:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:33.209 05:46:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.209 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:33.209 05:46:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:33.209 05:46:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:33.209 05:46:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:33.209 05:46:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:33.209 05:46:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:33.209 05:46:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:33.209 /tmp/:spdk-test:key0 00:28:33.209 05:46:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:33.209 05:46:13 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:33.209 05:46:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:33.209 /tmp/:spdk-test:key1 00:28:33.210 05:46:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:33.210 05:46:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=93913 00:28:33.210 05:46:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 93913 00:28:33.210 05:46:13 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:33.210 05:46:13 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 93913 ']' 00:28:33.210 05:46:13 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.210 05:46:13 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.210 05:46:13 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.210 05:46:13 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.210 05:46:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:33.469 [2024-12-16 05:46:13.555959] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:28:33.469 [2024-12-16 05:46:13.556142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93913 ] 00:28:33.728 [2024-12-16 05:46:13.735846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.728 [2024-12-16 05:46:13.815050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.987 [2024-12-16 05:46:13.989763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:34.246 05:46:14 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.246 05:46:14 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:28:34.246 05:46:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:34.246 05:46:14 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.246 05:46:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:34.246 [2024-12-16 05:46:14.439917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.246 null0 00:28:34.246 [2024-12-16 05:46:14.471909] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:34.246 [2024-12-16 05:46:14.472302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:34.246 05:46:14 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.247 05:46:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:34.247 1017835169 00:28:34.247 05:46:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:34.247 822135225 00:28:34.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:34.247 05:46:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=93927 00:28:34.247 05:46:14 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:34.247 05:46:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 93927 /var/tmp/bperf.sock 00:28:34.247 05:46:14 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 93927 ']' 00:28:34.247 05:46:14 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:34.247 05:46:14 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.247 05:46:14 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:34.247 05:46:14 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.247 05:46:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:34.505 [2024-12-16 05:46:14.611548] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:28:34.505 [2024-12-16 05:46:14.612010] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93927 ] 00:28:34.764 [2024-12-16 05:46:14.789726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.764 [2024-12-16 05:46:14.878528] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.330 05:46:15 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.330 05:46:15 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:28:35.330 05:46:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:35.330 05:46:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:35.899 05:46:15 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:35.899 05:46:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:36.158 [2024-12-16 05:46:16.204125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:36.158 05:46:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:36.158 05:46:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:36.417 [2024-12-16 05:46:16.526824] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:36.417 nvme0n1 00:28:36.417 05:46:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:36.417 05:46:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:36.417 05:46:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:36.417 05:46:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:36.417 05:46:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:36.417 05:46:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:36.676 05:46:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:36.676 05:46:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:36.676 05:46:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:36.676 05:46:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:36.676 05:46:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:36.676 05:46:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:36.676 05:46:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:36.935 05:46:17 keyring_linux -- keyring/linux.sh@25 -- # sn=1017835169 00:28:36.935 05:46:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:36.935 05:46:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:28:36.935 05:46:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 1017835169 == \1\0\1\7\8\3\5\1\6\9 ]] 00:28:36.935 05:46:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1017835169 00:28:36.935 05:46:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:36.935 05:46:17 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:37.193 Running I/O for 1 seconds... 00:28:38.129 10169.00 IOPS, 39.72 MiB/s 00:28:38.129 Latency(us) 00:28:38.129 [2024-12-16T05:46:18.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.129 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:38.129 nvme0n1 : 1.01 10180.94 39.77 0.00 0.00 12490.13 4379.00 18469.24 00:28:38.129 [2024-12-16T05:46:18.388Z] =================================================================================================================== 00:28:38.129 [2024-12-16T05:46:18.388Z] Total : 10180.94 39.77 0.00 0.00 12490.13 4379.00 18469.24 00:28:38.129 { 00:28:38.129 "results": [ 00:28:38.129 { 00:28:38.129 "job": "nvme0n1", 00:28:38.129 "core_mask": "0x2", 00:28:38.129 "workload": "randread", 00:28:38.129 "status": "finished", 00:28:38.129 "queue_depth": 128, 00:28:38.129 "io_size": 4096, 00:28:38.129 "runtime": 1.011498, 00:28:38.129 "iops": 10180.939556973914, 00:28:38.129 "mibps": 39.76929514442935, 00:28:38.129 "io_failed": 0, 00:28:38.129 "io_timeout": 0, 00:28:38.129 "avg_latency_us": 12490.130755133388, 00:28:38.129 "min_latency_us": 4378.996363636364, 00:28:38.129 "max_latency_us": 18469.236363636363 00:28:38.129 } 00:28:38.129 ], 00:28:38.129 "core_count": 1 00:28:38.129 } 00:28:38.129 05:46:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:38.129 05:46:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:38.388 05:46:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:38.388 05:46:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:38.388 05:46:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:38.388 05:46:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:38.388 05:46:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:38.388 05:46:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:38.647 05:46:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:38.647 05:46:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:38.647 05:46:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:38.647 05:46:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:38.647 05:46:18 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:28:38.647 05:46:18 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:28:38.647 05:46:18 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:38.647 05:46:18 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:38.647 05:46:18 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:38.647 05:46:18 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:38.647 05:46:18 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:38.647 05:46:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:38.907 [2024-12-16 05:46:19.048210] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:38.907 [2024-12-16 05:46:19.048720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:28:38.907 [2024-12-16 05:46:19.049697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:28:38.907 [2024-12-16 05:46:19.050677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:38.907 [2024-12-16 05:46:19.050729] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:38.907 [2024-12-16 05:46:19.050748] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:38.907 [2024-12-16 05:46:19.050761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:28:38.907 request: 00:28:38.907 { 00:28:38.907 "name": "nvme0", 00:28:38.907 "trtype": "tcp", 00:28:38.907 "traddr": "127.0.0.1", 00:28:38.907 "adrfam": "ipv4", 00:28:38.907 "trsvcid": "4420", 00:28:38.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:38.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:38.907 "prchk_reftag": false, 00:28:38.907 "prchk_guard": false, 00:28:38.907 "hdgst": false, 00:28:38.907 "ddgst": false, 00:28:38.907 "psk": ":spdk-test:key1", 00:28:38.907 "allow_unrecognized_csi": false, 00:28:38.907 "method": "bdev_nvme_attach_controller", 00:28:38.907 "req_id": 1 00:28:38.907 } 00:28:38.907 Got JSON-RPC error response 00:28:38.907 response: 00:28:38.907 { 00:28:38.907 "code": -5, 00:28:38.907 "message": "Input/output error" 00:28:38.907 } 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@33 -- # sn=1017835169 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1017835169 00:28:38.907 1 links removed 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@33 -- # sn=822135225 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 822135225 00:28:38.907 1 links removed 00:28:38.907 05:46:19 keyring_linux -- keyring/linux.sh@41 -- # killprocess 93927 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 93927 ']' 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 93927 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93927 00:28:38.907 killing process with pid 93927 00:28:38.907 Received shutdown signal, test time was about 1.000000 seconds 00:28:38.907 00:28:38.907 Latency(us) 00:28:38.907 [2024-12-16T05:46:19.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.907 [2024-12-16T05:46:19.166Z] =================================================================================================================== 00:28:38.907 [2024-12-16T05:46:19.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.907 05:46:19 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93927' 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@973 -- # kill 93927 00:28:38.907 05:46:19 keyring_linux -- common/autotest_common.sh@978 -- # wait 93927 00:28:39.845 05:46:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 93913 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 93913 ']' 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 93913 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93913 00:28:39.845 killing process with pid 93913 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93913' 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@973 -- # kill 93913 00:28:39.845 05:46:19 keyring_linux -- common/autotest_common.sh@978 -- # wait 93913 00:28:41.751 00:28:41.751 real 0m8.473s 00:28:41.751 user 0m15.274s 00:28:41.751 sys 0m1.519s 00:28:41.751 05:46:21 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.751 ************************************ 00:28:41.751 END TEST keyring_linux 00:28:41.751 ************************************ 00:28:41.751 05:46:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:41.751 05:46:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:41.751 05:46:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:41.751 05:46:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:41.751 05:46:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:41.751 05:46:21 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:28:41.751 05:46:21 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:28:41.751 05:46:21 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:28:41.751 05:46:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:41.751 05:46:21 -- common/autotest_common.sh@10 -- # set +x 00:28:41.751 05:46:21 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:28:41.751 05:46:21 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:28:41.751 05:46:21 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:28:41.751 05:46:21 -- common/autotest_common.sh@10 -- # set +x 00:28:43.129 INFO: APP EXITING 00:28:43.129 INFO: killing all VMs 
00:28:43.129 INFO: killing vhost app 00:28:43.129 INFO: EXIT DONE 00:28:43.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:43.955 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:43.955 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:44.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:44.522 Cleaning 00:28:44.522 Removing: /var/run/dpdk/spdk0/config 00:28:44.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:44.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:44.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:44.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:44.522 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:44.522 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:44.522 Removing: /var/run/dpdk/spdk1/config 00:28:44.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:44.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:44.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:44.522 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:44.522 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:44.522 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:44.522 Removing: /var/run/dpdk/spdk2/config 00:28:44.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:44.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:44.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:44.522 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:44.522 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:44.522 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:44.522 Removing: /var/run/dpdk/spdk3/config 00:28:44.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:44.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:44.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:44.522 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:44.523 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:44.523 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:44.523 Removing: /var/run/dpdk/spdk4/config 00:28:44.523 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:44.782 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:44.782 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:44.782 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:44.782 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:44.782 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:44.782 Removing: /dev/shm/nvmf_trace.0 00:28:44.782 Removing: /dev/shm/spdk_tgt_trace.pid59298 00:28:44.782 Removing: /var/run/dpdk/spdk0 00:28:44.782 Removing: /var/run/dpdk/spdk1 00:28:44.782 Removing: /var/run/dpdk/spdk2 00:28:44.782 Removing: /var/run/dpdk/spdk3 00:28:44.782 Removing: /var/run/dpdk/spdk4 00:28:44.782 Removing: /var/run/dpdk/spdk_pid59079 00:28:44.782 Removing: /var/run/dpdk/spdk_pid59298 00:28:44.782 Removing: /var/run/dpdk/spdk_pid59516 00:28:44.782 Removing: /var/run/dpdk/spdk_pid59620 00:28:44.782 Removing: /var/run/dpdk/spdk_pid59665 00:28:44.782 Removing: /var/run/dpdk/spdk_pid59793 00:28:44.782 Removing: /var/run/dpdk/spdk_pid59811 00:28:44.782 Removing: /var/run/dpdk/spdk_pid59970 00:28:44.782 Removing: /var/run/dpdk/spdk_pid60173 00:28:44.782 Removing: /var/run/dpdk/spdk_pid60339 00:28:44.782 Removing: /var/run/dpdk/spdk_pid60432 00:28:44.782 
Removing: /var/run/dpdk/spdk_pid60539 00:28:44.782 Removing: /var/run/dpdk/spdk_pid60650 00:28:44.782 Removing: /var/run/dpdk/spdk_pid60747 00:28:44.782 Removing: /var/run/dpdk/spdk_pid60792 00:28:44.782 Removing: /var/run/dpdk/spdk_pid60823 00:28:44.782 Removing: /var/run/dpdk/spdk_pid60899 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61005 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61469 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61533 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61601 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61617 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61738 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61754 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61881 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61897 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61961 00:28:44.782 Removing: /var/run/dpdk/spdk_pid61979 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62038 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62056 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62232 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62269 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62358 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62709 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62728 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62771 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62796 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62824 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62855 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62886 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62908 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62945 00:28:44.782 Removing: /var/run/dpdk/spdk_pid62970 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63006 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63037 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63063 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63090 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63121 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63147 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63173 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63204 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63225 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63253 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63295 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63321 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63362 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63441 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63487 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63514 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63549 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63565 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63590 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63639 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63670 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63705 00:28:44.782 Removing: /var/run/dpdk/spdk_pid63727 00:28:45.042 Removing: /var/run/dpdk/spdk_pid63748 00:28:45.043 Removing: /var/run/dpdk/spdk_pid63770 00:28:45.043 Removing: /var/run/dpdk/spdk_pid63791 00:28:45.043 Removing: /var/run/dpdk/spdk_pid63813 00:28:45.043 Removing: /var/run/dpdk/spdk_pid63829 00:28:45.043 Removing: /var/run/dpdk/spdk_pid63856 00:28:45.043 Removing: /var/run/dpdk/spdk_pid63891 00:28:45.043 Removing: /var/run/dpdk/spdk_pid63935 00:28:45.043 Removing: /var/run/dpdk/spdk_pid63951 00:28:45.043 Removing: /var/run/dpdk/spdk_pid63986 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64013 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64027 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64085 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64104 00:28:45.043 Removing: 
/var/run/dpdk/spdk_pid64143 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64162 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64182 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64201 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64215 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64235 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64254 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64274 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64368 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64455 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64610 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64656 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64713 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64739 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64768 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64794 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64847 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64880 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64963 00:28:45.043 Removing: /var/run/dpdk/spdk_pid64999 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65072 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65175 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65264 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65316 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65433 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65493 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65538 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65788 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65906 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65945 00:28:45.043 Removing: /var/run/dpdk/spdk_pid65977 00:28:45.043 Removing: /var/run/dpdk/spdk_pid66028 00:28:45.043 Removing: /var/run/dpdk/spdk_pid66068 00:28:45.043 Removing: /var/run/dpdk/spdk_pid66114 00:28:45.043 Removing: /var/run/dpdk/spdk_pid66157 00:28:45.043 Removing: /var/run/dpdk/spdk_pid66565 00:28:45.043 Removing: /var/run/dpdk/spdk_pid66604 00:28:45.043 Removing: /var/run/dpdk/spdk_pid66975 00:28:45.043 Removing: /var/run/dpdk/spdk_pid67457 00:28:45.043 Removing: /var/run/dpdk/spdk_pid67739 00:28:45.043 Removing: /var/run/dpdk/spdk_pid68681 00:28:45.043 Removing: /var/run/dpdk/spdk_pid69639 00:28:45.043 Removing: /var/run/dpdk/spdk_pid69774 00:28:45.043 Removing: /var/run/dpdk/spdk_pid69843 00:28:45.043 Removing: /var/run/dpdk/spdk_pid71330 00:28:45.043 Removing: /var/run/dpdk/spdk_pid71701 00:28:45.043 Removing: /var/run/dpdk/spdk_pid75449 00:28:45.043 Removing: /var/run/dpdk/spdk_pid75858 00:28:45.043 Removing: /var/run/dpdk/spdk_pid75972 00:28:45.043 Removing: /var/run/dpdk/spdk_pid76123 00:28:45.043 Removing: /var/run/dpdk/spdk_pid76158 00:28:45.043 Removing: /var/run/dpdk/spdk_pid76199 00:28:45.043 Removing: /var/run/dpdk/spdk_pid76234 00:28:45.043 Removing: /var/run/dpdk/spdk_pid76358 00:28:45.043 Removing: /var/run/dpdk/spdk_pid76506 00:28:45.043 Removing: /var/run/dpdk/spdk_pid76695 00:28:45.043 Removing: /var/run/dpdk/spdk_pid76791 00:28:45.043 Removing: /var/run/dpdk/spdk_pid77005 00:28:45.043 Removing: /var/run/dpdk/spdk_pid77108 00:28:45.043 Removing: /var/run/dpdk/spdk_pid77214 00:28:45.043 Removing: /var/run/dpdk/spdk_pid77580 00:28:45.043 Removing: /var/run/dpdk/spdk_pid78014 00:28:45.043 Removing: /var/run/dpdk/spdk_pid78015 00:28:45.043 Removing: /var/run/dpdk/spdk_pid78016 00:28:45.043 Removing: /var/run/dpdk/spdk_pid78298 00:28:45.043 Removing: /var/run/dpdk/spdk_pid78573 00:28:45.043 Removing: /var/run/dpdk/spdk_pid78587 00:28:45.043 Removing: /var/run/dpdk/spdk_pid80939 00:28:45.043 Removing: /var/run/dpdk/spdk_pid81361 00:28:45.043 Removing: /var/run/dpdk/spdk_pid81370 
00:28:45.302 Removing: /var/run/dpdk/spdk_pid81709 00:28:45.302 Removing: /var/run/dpdk/spdk_pid81724 00:28:45.302 Removing: /var/run/dpdk/spdk_pid81749 00:28:45.302 Removing: /var/run/dpdk/spdk_pid81784 00:28:45.302 Removing: /var/run/dpdk/spdk_pid81790 00:28:45.302 Removing: /var/run/dpdk/spdk_pid81880 00:28:45.302 Removing: /var/run/dpdk/spdk_pid81889 00:28:45.302 Removing: /var/run/dpdk/spdk_pid81998 00:28:45.302 Removing: /var/run/dpdk/spdk_pid82005 00:28:45.302 Removing: /var/run/dpdk/spdk_pid82110 00:28:45.302 Removing: /var/run/dpdk/spdk_pid82119 00:28:45.302 Removing: /var/run/dpdk/spdk_pid82573 00:28:45.302 Removing: /var/run/dpdk/spdk_pid82615 00:28:45.302 Removing: /var/run/dpdk/spdk_pid82724 00:28:45.302 Removing: /var/run/dpdk/spdk_pid82795 00:28:45.302 Removing: /var/run/dpdk/spdk_pid83164 00:28:45.302 Removing: /var/run/dpdk/spdk_pid83371 00:28:45.302 Removing: /var/run/dpdk/spdk_pid83816 00:28:45.302 Removing: /var/run/dpdk/spdk_pid84386 00:28:45.302 Removing: /var/run/dpdk/spdk_pid85256 00:28:45.302 Removing: /var/run/dpdk/spdk_pid85934 00:28:45.302 Removing: /var/run/dpdk/spdk_pid85937 00:28:45.302 Removing: /var/run/dpdk/spdk_pid87966 00:28:45.302 Removing: /var/run/dpdk/spdk_pid88033 00:28:45.302 Removing: /var/run/dpdk/spdk_pid88101 00:28:45.302 Removing: /var/run/dpdk/spdk_pid88176 00:28:45.302 Removing: /var/run/dpdk/spdk_pid88306 00:28:45.302 Removing: /var/run/dpdk/spdk_pid88373 00:28:45.302 Removing: /var/run/dpdk/spdk_pid88440 00:28:45.302 Removing: /var/run/dpdk/spdk_pid88501 00:28:45.302 Removing: /var/run/dpdk/spdk_pid88892 00:28:45.303 Removing: /var/run/dpdk/spdk_pid90108 00:28:45.303 Removing: /var/run/dpdk/spdk_pid90256 00:28:45.303 Removing: /var/run/dpdk/spdk_pid90504 00:28:45.303 Removing: /var/run/dpdk/spdk_pid91115 00:28:45.303 Removing: /var/run/dpdk/spdk_pid91274 00:28:45.303 Removing: /var/run/dpdk/spdk_pid91439 00:28:45.303 Removing: /var/run/dpdk/spdk_pid91536 00:28:45.303 Removing: /var/run/dpdk/spdk_pid91694 00:28:45.303 Removing: /var/run/dpdk/spdk_pid91803 00:28:45.303 Removing: /var/run/dpdk/spdk_pid92526 00:28:45.303 Removing: /var/run/dpdk/spdk_pid92567 00:28:45.303 Removing: /var/run/dpdk/spdk_pid92599 00:28:45.303 Removing: /var/run/dpdk/spdk_pid92959 00:28:45.303 Removing: /var/run/dpdk/spdk_pid92994 00:28:45.303 Removing: /var/run/dpdk/spdk_pid93025 00:28:45.303 Removing: /var/run/dpdk/spdk_pid93496 00:28:45.303 Removing: /var/run/dpdk/spdk_pid93509 00:28:45.303 Removing: /var/run/dpdk/spdk_pid93767 00:28:45.303 Removing: /var/run/dpdk/spdk_pid93913 00:28:45.303 Removing: /var/run/dpdk/spdk_pid93927 00:28:45.303 Clean 00:28:45.303 05:46:25 -- common/autotest_common.sh@1453 -- # return 0 00:28:45.303 05:46:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:28:45.303 05:46:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.303 05:46:25 -- common/autotest_common.sh@10 -- # set +x 00:28:45.303 05:46:25 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:28:45.303 05:46:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:45.303 05:46:25 -- common/autotest_common.sh@10 -- # set +x 00:28:45.561 05:46:25 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:45.561 05:46:25 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:45.561 05:46:25 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:45.561 05:46:25 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:28:45.561 05:46:25 -- spdk/autotest.sh@398 
-- # hostname 00:28:45.561 05:46:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:45.819 geninfo: WARNING: invalid characters removed from testname! 00:29:12.368 05:46:48 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:12.368 05:46:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:14.310 05:46:54 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:17.597 05:46:57 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:19.500 05:46:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:22.034 05:47:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:24.567 05:47:04 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:24.567 05:47:04 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:24.567 05:47:04 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:24.567 05:47:04 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:24.567 05:47:04 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:24.567 05:47:04 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:24.567 + [[ -n 5252 ]] 00:29:24.567 + sudo kill 5252 00:29:24.835 [Pipeline] } 00:29:24.851 [Pipeline] // timeout 00:29:24.856 [Pipeline] } 00:29:24.870 [Pipeline] // stage 00:29:24.876 [Pipeline] } 00:29:24.890 [Pipeline] // catchError 00:29:24.899 [Pipeline] stage 00:29:24.901 [Pipeline] { (Stop VM) 00:29:24.914 [Pipeline] sh 00:29:25.195 + vagrant halt 00:29:28.482 ==> default: Halting domain... 00:29:35.059 [Pipeline] sh 00:29:35.333 + vagrant destroy -f 00:29:37.867 ==> default: Removing domain... 00:29:38.139 [Pipeline] sh 00:29:38.422 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:29:38.431 [Pipeline] } 00:29:38.446 [Pipeline] // stage 00:29:38.451 [Pipeline] } 00:29:38.465 [Pipeline] // dir 00:29:38.470 [Pipeline] } 00:29:38.484 [Pipeline] // wrap 00:29:38.490 [Pipeline] } 00:29:38.502 [Pipeline] // catchError 00:29:38.511 [Pipeline] stage 00:29:38.513 [Pipeline] { (Epilogue) 00:29:38.526 [Pipeline] sh 00:29:38.831 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:44.124 [Pipeline] catchError 00:29:44.126 [Pipeline] { 00:29:44.139 [Pipeline] sh 00:29:44.421 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:44.421 Artifacts sizes are good 00:29:44.430 [Pipeline] } 00:29:44.444 [Pipeline] // catchError 00:29:44.454 [Pipeline] archiveArtifacts 00:29:44.461 Archiving artifacts 00:29:44.585 [Pipeline] cleanWs 00:29:44.595 [WS-CLEANUP] Deleting project workspace... 00:29:44.595 [WS-CLEANUP] Deferred wipeout is used... 00:29:44.601 [WS-CLEANUP] done 00:29:44.603 [Pipeline] } 00:29:44.617 [Pipeline] // stage 00:29:44.622 [Pipeline] } 00:29:44.635 [Pipeline] // node 00:29:44.640 [Pipeline] End of Pipeline 00:29:44.684 Finished: SUCCESS
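
Editor's note (not part of the captured console output): the keyring_linux steps traced earlier in this log can be reproduced by hand roughly as follows. This is a distilled sketch of the commands that appear verbatim in the trace above; it assumes an SPDK target is already listening on 127.0.0.1:4420 with the matching PSK configured (as the log's spdk_tgt does), that the commands are run from the SPDK repo checkout, and that the bdevperf RPC socket is /var/tmp/bperf.sock. Key material and key names are the test's own example values, not general defaults.

    # Load the example TLS PSKs into the session keyring; the kernel assigns the serial numbers
    keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
    keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s

    # Enable kernel-keyring lookups in the bdevperf app, finish init, and attach using the PSK by key name
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

    # Tear down: detach the controller and unlink the keys by their serial numbers
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
    keyctl unlink "$(keyctl search @s user :spdk-test:key0)"
    keyctl unlink "$(keyctl search @s user :spdk-test:key1)"

The negative case exercised in the log (attaching with --psk :spdk-test:key1, which the target does not accept) is expected to fail with the JSON-RPC input/output error shown above.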